Test Report: Docker_Linux_crio 12230

b85c4fe0fcec6d00161b49ecbfd8182c89122b1a:2021-08-16:20050

Test fail (15/262)

TestAddons/parallel/Ingress (303.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-c9pwv" [deebb947-63ac-46ac-a67e-e6cafe37f501] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 3.731721ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816214127-6487 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210816214127-6487 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [7093c73d-d7a1-4648-b2a5-8ec26a71fa88] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [7093c73d-d7a1-4648-b2a5-8ec26a71fa88] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00471549s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210816214127-6487 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.293125509s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816214127-6487 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210816214127-6487 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.790761485s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable ingress --alsologtostderr -v=1: (28.552549346s)
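Note: exit status 28 from the ssh'd curl is curl's CURLE_OPERATION_TIMEDOUT, i.e. the ingress never answered within the roughly 2m10s each attempt ran. A minimal manual re-run of the failing probe outside the harness could look like the sketch below; the explicit --max-time value is an illustrative assumption, not a flag the harness passes.

	# hypothetical manual reproduction; 130s mirrors the ~2m10s observed above
	out/minikube-linux-amd64 -p addons-20210816214127-6487 ssh "curl -s --max-time 130 http://127.0.0.1/ -H 'Host: nginx.example.com'"
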
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210816214127-6487
helpers_test.go:236: (dbg) docker inspect addons-20210816214127-6487:

-- stdout --
	[
	    {
	        "Id": "faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250",
	        "Created": "2021-08-16T21:41:29.74483609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8076,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T21:41:32.516037322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/hostname",
	        "HostsPath": "/var/lib/docker/containers/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/hosts",
	        "LogPath": "/var/lib/docker/containers/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250-json.log",
	        "Name": "/addons-20210816214127-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210816214127-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210816214127-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1b029631cb46571fe4430ecc8489bc6b0b5f9357b9d0799ebac958fe8040fc76-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b029631cb46571fe4430ecc8489bc6b0b5f9357b9d0799ebac958fe8040fc76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b029631cb46571fe4430ecc8489bc6b0b5f9357b9d0799ebac958fe8040fc76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b029631cb46571fe4430ecc8489bc6b0b5f9357b9d0799ebac958fe8040fc76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210816214127-6487",
	                "Source": "/var/lib/docker/volumes/addons-20210816214127-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210816214127-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210816214127-6487",
	                "name.minikube.sigs.k8s.io": "addons-20210816214127-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fc861a3b17e582ece2ab260241ba0e0bf2c94d66e573a9333b169cb3261e518",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9fc861a3b17e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210816214127-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "faf0978ccbf3"
	                    ],
	                    "NetworkID": "427be9125db37ddb064a12035eab82a09dfbfc6c2a862093db05cc79c7c071ca",
	                    "EndpointID": "9a569924456b7b8b09326fdf6682b812d405a59cacfbfbbfe30424d175ea2db6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
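
Note: the empty HostPort values under HostConfig.PortBindings in the inspect output above only request ephemeral host ports; the ports actually assigned appear under NetworkSettings.Ports (for example 22/tcp -> 127.0.0.1:32772). One can read a mapping back with the same Go-template query the harness runs further down in these logs:

	# same format string as the cli_runner invocations below
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-20210816214127-6487
	# prints 32772 for this container, matching the SSH endpoint used by sshutil.go below
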
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210816214127-6487 -n addons-20210816214127-6487
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 logs -n 25
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20210816214057-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:41:18 UTC | Mon, 16 Aug 2021 21:41:18 UTC |
	| delete  | -p                                  | download-only-20210816214057-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:41:18 UTC | Mon, 16 Aug 2021 21:41:19 UTC |
	|         | download-only-20210816214057-6487   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-only-20210816214057-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:41:19 UTC | Mon, 16 Aug 2021 21:41:19 UTC |
	|         | download-only-20210816214057-6487   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-docker-20210816214119-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:41:27 UTC | Mon, 16 Aug 2021 21:41:27 UTC |
	|         | download-docker-20210816214119-6487 |                                     |         |         |                               |                               |
	| start   | -p addons-20210816214127-6487       | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:41:27 UTC | Mon, 16 Aug 2021 21:43:48 UTC |
	|         | --wait=true --memory=4000           |                                     |         |         |                               |                               |
	|         | --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --addons=registry                   |                                     |         |         |                               |                               |
	|         | --addons=metrics-server             |                                     |         |         |                               |                               |
	|         | --addons=olm                        |                                     |         |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                               |                               |
	|         | --driver=docker                     |                                     |         |         |                               |                               |
	|         | --container-runtime=crio            |                                     |         |         |                               |                               |
	|         | --addons=ingress                    |                                     |         |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:44:02 UTC | Mon, 16 Aug 2021 21:44:11 UTC |
	|         | addons enable gcp-auth --force      |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:44:16 UTC | Mon, 16 Aug 2021 21:44:17 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:44:21 UTC | Mon, 16 Aug 2021 21:44:21 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487 ip       | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:44:24 UTC | Mon, 16 Aug 2021 21:44:24 UTC |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:44:24 UTC | Mon, 16 Aug 2021 21:44:24 UTC |
	|         | addons disable registry             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:45:02 UTC | Mon, 16 Aug 2021 21:45:08 UTC |
	|         | addons disable gcp-auth             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:45:06 UTC | Mon, 16 Aug 2021 21:45:13 UTC |
	|         | addons disable                      |                                     |         |         |                               |                               |
	|         | csi-hostpath-driver                 |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:45:13 UTC | Mon, 16 Aug 2021 21:45:14 UTC |
	|         | addons disable volumesnapshots      |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210816214127-6487          | addons-20210816214127-6487          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:48:50 UTC | Mon, 16 Aug 2021 21:49:18 UTC |
	|         | addons disable ingress              |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:41:27
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:41:27.267637    7436 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:41:27.268131    7436 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:27.268144    7436 out.go:311] Setting ErrFile to fd 2...
	I0816 21:41:27.268151    7436 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:27.268394    7436 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:41:27.268973    7436 out.go:305] Setting JSON to false
	I0816 21:41:27.302181    7436 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1254,"bootTime":1629148833,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:41:27.302262    7436 start.go:121] virtualization: kvm guest
	I0816 21:41:27.304537    7436 out.go:177] * [addons-20210816214127-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 21:41:27.306180    7436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:41:27.304671    7436 notify.go:169] Checking for updates...
	I0816 21:41:27.307642    7436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:41:27.309135    7436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:41:27.310490    7436 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:41:27.310696    7436 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:41:27.353191    7436 docker.go:132] docker version: linux-19.03.15
	I0816 21:41:27.353265    7436 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:27.427326    7436 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:27.384546661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:27.427406    7436 docker.go:244] overlay module found
	I0816 21:41:27.429509    7436 out.go:177] * Using the docker driver based on user configuration
	I0816 21:41:27.429530    7436 start.go:278] selected driver: docker
	I0816 21:41:27.429535    7436 start.go:751] validating driver "docker" against <nil>
	I0816 21:41:27.429552    7436 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 21:41:27.429585    7436 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 21:41:27.429603    7436 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 21:41:27.430885    7436 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 21:41:27.431639    7436 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:27.504764    7436 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:27.463775524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:27.504860    7436 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 21:41:27.505009    7436 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 21:41:27.505034    7436 cni.go:93] Creating CNI manager for ""
	I0816 21:41:27.505041    7436 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:41:27.505051    7436 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 21:41:27.505060    7436 start_flags.go:277] config:
	{Name:addons-20210816214127-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816214127-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:27.507087    7436 out.go:177] * Starting control plane node addons-20210816214127-6487 in cluster addons-20210816214127-6487
	I0816 21:41:27.507126    7436 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:41:27.508662    7436 out.go:177] * Pulling base image ...
	I0816 21:41:27.508692    7436 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:41:27.508721    7436 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 21:41:27.508733    7436 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:27.508784    7436 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:41:27.508916    7436 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 21:41:27.508932    7436 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 21:41:27.509211    7436 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/config.json ...
	I0816 21:41:27.509242    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/config.json: {Name:mkaa92f8d1caa3992204d0f287a27000c4254395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:27.587824    7436 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:41:27.587850    7436 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 21:41:27.587866    7436 cache.go:205] Successfully downloaded all kic artifacts
	I0816 21:41:27.587897    7436 start.go:313] acquiring machines lock for addons-20210816214127-6487: {Name:mk15c2dc3c59147eeee3362c067810d3954566fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:41:27.588051    7436 start.go:317] acquired machines lock for "addons-20210816214127-6487" in 116.964µs
	I0816 21:41:27.588076    7436 start.go:89] Provisioning new machine with config: &{Name:addons-20210816214127-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816214127-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 21:41:27.588148    7436 start.go:126] createHost starting for "" (driver="docker")
	I0816 21:41:27.590304    7436 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 21:41:27.590498    7436 start.go:160] libmachine.API.Create for "addons-20210816214127-6487" (driver="docker")
	I0816 21:41:27.590523    7436 client.go:168] LocalClient.Create starting
	I0816 21:41:27.590605    7436 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 21:41:27.864664    7436 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 21:41:28.072259    7436 cli_runner.go:115] Run: docker network inspect addons-20210816214127-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 21:41:28.107212    7436 cli_runner.go:162] docker network inspect addons-20210816214127-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 21:41:28.107295    7436 network_create.go:255] running [docker network inspect addons-20210816214127-6487] to gather additional debugging logs...
	I0816 21:41:28.107316    7436 cli_runner.go:115] Run: docker network inspect addons-20210816214127-6487
	W0816 21:41:28.139660    7436 cli_runner.go:162] docker network inspect addons-20210816214127-6487 returned with exit code 1
	I0816 21:41:28.139699    7436 network_create.go:258] error running [docker network inspect addons-20210816214127-6487]: docker network inspect addons-20210816214127-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210816214127-6487
	I0816 21:41:28.139717    7436 network_create.go:260] output of [docker network inspect addons-20210816214127-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210816214127-6487
	
	** /stderr **
	I0816 21:41:28.139765    7436 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:41:28.171890    7436 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114128] misses:0}
	I0816 21:41:28.171962    7436 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 21:41:28.171982    7436 network_create.go:106] attempt to create docker network addons-20210816214127-6487 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 21:41:28.172025    7436 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210816214127-6487
	I0816 21:41:28.241242    7436 network_create.go:90] docker network addons-20210816214127-6487 192.168.49.0/24 created
	I0816 21:41:28.241269    7436 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210816214127-6487" container
	I0816 21:41:28.241327    7436 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 21:41:28.274090    7436 cli_runner.go:115] Run: docker volume create addons-20210816214127-6487 --label name.minikube.sigs.k8s.io=addons-20210816214127-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 21:41:28.308801    7436 oci.go:102] Successfully created a docker volume addons-20210816214127-6487
	I0816 21:41:28.308868    7436 cli_runner.go:115] Run: docker run --rm --name addons-20210816214127-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816214127-6487 --entrypoint /usr/bin/test -v addons-20210816214127-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 21:41:29.617949    7436 cli_runner.go:168] Completed: docker run --rm --name addons-20210816214127-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816214127-6487 --entrypoint /usr/bin/test -v addons-20210816214127-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (1.309049333s)
	I0816 21:41:29.617975    7436 oci.go:106] Successfully prepared a docker volume addons-20210816214127-6487
	W0816 21:41:29.618008    7436 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 21:41:29.618017    7436 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 21:41:29.618030    7436 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:41:29.618059    7436 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 21:41:29.618061    7436 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 21:41:29.618115    7436 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210816214127-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 21:41:29.709108    7436 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210816214127-6487 --name addons-20210816214127-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816214127-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210816214127-6487 --network addons-20210816214127-6487 --ip 192.168.49.2 --volume addons-20210816214127-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 21:41:32.523812    7436 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210816214127-6487 --name addons-20210816214127-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816214127-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210816214127-6487 --network addons-20210816214127-6487 --ip 192.168.49.2 --volume addons-20210816214127-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (2.814625946s)
	I0816 21:41:32.523946    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Running}}
	I0816 21:41:32.563836    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:41:32.603769    7436 cli_runner.go:115] Run: docker exec addons-20210816214127-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 21:41:32.731984    7436 oci.go:278] the created container "addons-20210816214127-6487" has a running status.
	I0816 21:41:32.732015    7436 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa...
	I0816 21:41:33.108656    7436 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 21:41:33.501903    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:41:33.541761    7436 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 21:41:33.541781    7436 kic_runner.go:115] Args: [docker exec --privileged addons-20210816214127-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 21:41:36.327412    7436 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210816214127-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.709254128s)
	I0816 21:41:36.327451    7436 kic.go:188] duration metric: took 6.709390 seconds to extract preloaded images to volume
	I0816 21:41:36.327510    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:41:36.362349    7436 machine.go:88] provisioning docker machine ...
	I0816 21:41:36.362382    7436 ubuntu.go:169] provisioning hostname "addons-20210816214127-6487"
	I0816 21:41:36.362432    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:36.397059    7436 main.go:130] libmachine: Using SSH client type: native
	I0816 21:41:36.397253    7436 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0816 21:41:36.397269    7436 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210816214127-6487 && echo "addons-20210816214127-6487" | sudo tee /etc/hostname
	I0816 21:41:36.555555    7436 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210816214127-6487
	
	I0816 21:41:36.555632    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:36.590715    7436 main.go:130] libmachine: Using SSH client type: native
	I0816 21:41:36.590847    7436 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0816 21:41:36.590872    7436 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210816214127-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210816214127-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210816214127-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 21:41:36.714906    7436 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 21:41:36.714930    7436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 21:41:36.714950    7436 ubuntu.go:177] setting up certificates
	I0816 21:41:36.714958    7436 provision.go:83] configureAuth start
	I0816 21:41:36.714998    7436 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816214127-6487
	I0816 21:41:36.750856    7436 provision.go:138] copyHostCerts
	I0816 21:41:36.750918    7436 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 21:41:36.751005    7436 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 21:41:36.751058    7436 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 21:41:36.751095    7436 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.addons-20210816214127-6487 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210816214127-6487]
	I0816 21:41:36.962691    7436 provision.go:172] copyRemoteCerts
	I0816 21:41:36.962743    7436 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 21:41:36.962777    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:36.997961    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:41:37.086120    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 21:41:37.104539    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0816 21:41:37.119174    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 21:41:37.133382    7436 provision.go:86] duration metric: configureAuth took 418.416199ms
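configureAuth copies the CA material into the machine store and mints a server certificate whose SANs cover every name and address a client might dial (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name). A rough openssl equivalent of what libmachine does in Go, with filenames and SANs mirroring the log rather than minikube's exact code:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.addons-20210816214127-6487"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-20210816214127-6487')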
	I0816 21:41:37.133399    7436 ubuntu.go:193] setting minikube options for container-runtime
	I0816 21:41:37.133518    7436 config.go:177] Loaded profile config "addons-20210816214127-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:41:37.133615    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:37.168902    7436 main.go:130] libmachine: Using SSH client type: native
	I0816 21:41:37.169050    7436 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0816 21:41:37.169066    7436 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 21:41:37.762380    7436 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
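The provisioner drops a one-variable environment file that CRI-O's systemd unit sources, marking the service CIDR as an insecure registry range, then bounces the daemon. Consolidated:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio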
	
	I0816 21:41:37.762408    7436 machine.go:91] provisioned docker machine in 1.40004131s
	I0816 21:41:37.762417    7436 client.go:171] LocalClient.Create took 10.1718871s
	I0816 21:41:37.762434    7436 start.go:168] duration metric: libmachine.API.Create for "addons-20210816214127-6487" took 10.171936646s
	I0816 21:41:37.762445    7436 start.go:267] post-start starting for "addons-20210816214127-6487" (driver="docker")
	I0816 21:41:37.762451    7436 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 21:41:37.762511    7436 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 21:41:37.762552    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:37.800753    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:41:37.887016    7436 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 21:41:37.889489    7436 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 21:41:37.889510    7436 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 21:41:37.889520    7436 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 21:41:37.889525    7436 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 21:41:37.889534    7436 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 21:41:37.889587    7436 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 21:41:37.889613    7436 start.go:270] post-start completed in 127.160924ms
	I0816 21:41:37.889914    7436 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816214127-6487
	I0816 21:41:37.925209    7436 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/config.json ...
	I0816 21:41:37.925434    7436 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:41:37.925482    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:37.959900    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:41:38.045554    7436 start.go:129] duration metric: createHost completed in 10.457395201s
	I0816 21:41:38.045574    7436 start.go:80] releasing machines lock for "addons-20210816214127-6487", held for 10.457511778s
	I0816 21:41:38.045641    7436 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816214127-6487
	I0816 21:41:38.080288    7436 ssh_runner.go:149] Run: systemctl --version
	I0816 21:41:38.080338    7436 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 21:41:38.080345    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:38.080384    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:41:38.121813    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:41:38.122226    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:41:38.305222    7436 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 21:41:38.321208    7436 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 21:41:38.329054    7436 docker.go:153] disabling docker service ...
	I0816 21:41:38.329102    7436 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 21:41:38.337369    7436 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 21:41:38.345851    7436 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 21:41:38.407677    7436 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 21:41:38.470712    7436 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
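Because this profile uses the crio runtime, the Docker daemon baked into the kicbase image is shut down first; masking docker.service (not just disabling it) keeps socket activation or another unit from starting it back up. The equivalent sequence:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"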
	I0816 21:41:38.478658    7436 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 21:41:38.489775    7436 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 21:41:38.496629    7436 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 21:41:38.496656    7436 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
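Three small edits wire the node to CRI-O: crictl is pointed at the CRI-O socket, and crio.conf gets the pinned pause image plus kindnet as the default CNI network. Consolidated from the commands above:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\nimage-endpoint: unix:///var/run/crio/crio.sock\n' \
      | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' /etc/crio/crio.conf
    sudo sed -i 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' /etc/crio/crio.conf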
	I0816 21:41:38.504276    7436 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 21:41:38.510750    7436 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 21:41:38.510788    7436 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 21:41:38.517068    7436 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
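The status-255 sysctl probe is expected on a fresh node: /proc/sys/net/bridge only appears once br_netfilter is loaded, so minikube treats the failure as benign, loads the module, and enables IPv4 forwarding. As a sketch:

    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter   # creates /proc/sys/net/bridge/*
    fi
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'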
	I0816 21:41:38.522485    7436 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 21:41:38.576732    7436 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 21:41:38.584575    7436 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 21:41:38.584650    7436 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 21:41:38.587283    7436 start.go:413] Will wait 60s for crictl version
	I0816 21:41:38.587326    7436 ssh_runner.go:149] Run: sudo crictl version
	I0816 21:41:38.722940    7436 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 21:41:38.723046    7436 ssh_runner.go:149] Run: crio --version
	I0816 21:41:38.786655    7436 ssh_runner.go:149] Run: crio --version
	I0816 21:41:38.847230    7436 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 21:41:38.847292    7436 cli_runner.go:115] Run: docker network inspect addons-20210816214127-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:41:38.881464    7436 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 21:41:38.884555    7436 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
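The hosts entry is rewritten via a temp file and cp rather than sed -i, presumably because /etc/hosts is bind-mounted into the container, where rename-based in-place edits fail; cp rewrites the existing inode. Generalized (ENTRY_IP and ENTRY_NAME are stand-ins for the values in the log):

    ENTRY_IP=192.168.49.1 ENTRY_NAME=host.minikube.internal
    { grep -v "$(printf '\t')${ENTRY_NAME}\$" /etc/hosts
      printf '%s\t%s\n' "$ENTRY_IP" "$ENTRY_NAME"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$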
	I0816 21:41:38.893016    7436 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:41:38.893078    7436 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 21:41:38.937580    7436 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 21:41:38.937602    7436 crio.go:333] Images already preloaded, skipping extraction
	I0816 21:41:38.937648    7436 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 21:41:38.962977    7436 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 21:41:38.962997    7436 cache_images.go:74] Images are preloaded, skipping loading
	I0816 21:41:38.963070    7436 ssh_runner.go:149] Run: crio config
	I0816 21:41:39.026347    7436 cni.go:93] Creating CNI manager for ""
	I0816 21:41:39.026366    7436 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:41:39.026375    7436 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 21:41:39.026386    7436 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210816214127-6487 NodeName:addons-20210816214127-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 21:41:39.026503    7436 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210816214127-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
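The rendered file is four YAML documents in one: InitConfiguration and ClusterConfiguration for kubeadm itself, a KubeletConfiguration, and a KubeProxyConfiguration. Before it is copied to /var/tmp/minikube/kubeadm.yaml below, it could be sanity-checked without touching the node, for example:

    # --dry-run prints the objects kubeadm would create without applying anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run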
	
	I0816 21:41:39.026633    7436 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210816214127-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210816214127-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
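The unit text above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below). The empty ExecStart= line matters: systemd treats ExecStart as additive across drop-ins, so the list must be cleared before the replacement command is set. A minimal illustration with a hypothetical override file:

    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/99-example.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload   # re-read units so the override applies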
	I0816 21:41:39.026678    7436 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 21:41:39.035424    7436 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 21:41:39.035488    7436 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 21:41:39.041685    7436 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (557 bytes)
	I0816 21:41:39.052709    7436 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 21:41:39.063448    7436 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0816 21:41:39.075434    7436 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 21:41:39.077936    7436 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 21:41:39.086410    7436 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487 for IP: 192.168.49.2
	I0816 21:41:39.086448    7436 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 21:41:39.326324    7436 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt ...
	I0816 21:41:39.326353    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt: {Name:mkd367ac7f37792b2149c8b0eeee6d183ce4a19d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.326551    7436 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key ...
	I0816 21:41:39.326570    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key: {Name:mk92cd8b8cec7037a7977195782603699fa645ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.326683    7436 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 21:41:39.454444    7436 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt ...
	I0816 21:41:39.454476    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt: {Name:mk64226f5cfe53229345865b975b077862475ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.454659    7436 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key ...
	I0816 21:41:39.454672    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key: {Name:mk536afc41d19ebd7e928f73a27c2dfcb060bc26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.454785    7436 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.key
	I0816 21:41:39.454795    7436 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt with IP's: []
	I0816 21:41:39.616837    7436 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt ...
	I0816 21:41:39.616866    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: {Name:mk588bb6662dabbd6093d44f6373fc312467a51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.617054    7436 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.key ...
	I0816 21:41:39.617073    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.key: {Name:mk86a9e37cd5502494aa2b91120ebdf61514b147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.617151    7436 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key.dd3b5fb2
	I0816 21:41:39.617161    7436 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 21:41:39.707224    7436 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt.dd3b5fb2 ...
	I0816 21:41:39.707250    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt.dd3b5fb2: {Name:mk82058492a7dffd18c52d5d40f768e9b35c46fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.707431    7436 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key.dd3b5fb2 ...
	I0816 21:41:39.707443    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key.dd3b5fb2: {Name:mke9386e4b5ca524a3cf9a516fee42eaeda42e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.707513    7436 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt
	I0816 21:41:39.707574    7436 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key
	I0816 21:41:39.707622    7436 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.key
	I0816 21:41:39.707630    7436 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.crt with IP's: []
	I0816 21:41:39.898498    7436 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.crt ...
	I0816 21:41:39.898526    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.crt: {Name:mk859bb66a40aeacfd63a5bd152b99dfd5081013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.898698    7436 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.key ...
	I0816 21:41:39.898710    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.key: {Name:mkc95040687ad96e20dec4f5d573301527019746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:41:39.898870    7436 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 21:41:39.898905    7436 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 21:41:39.898928    7436 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 21:41:39.898955    7436 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 21:41:39.899799    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 21:41:39.916273    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 21:41:39.931135    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 21:41:39.945832    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 21:41:39.960247    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 21:41:39.974186    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 21:41:39.988533    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 21:41:40.002866    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 21:41:40.017551    7436 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 21:41:40.032853    7436 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 21:41:40.043365    7436 ssh_runner.go:149] Run: openssl version
	I0816 21:41:40.052265    7436 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 21:41:40.060244    7436 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:41:40.062787    7436 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:41:40.062822    7436 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:41:40.066909    7436 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
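The openssl x509 -hash output is the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, so linking b5213941.0 to minikubeCA.pem makes the cluster CA trusted system-wide. By hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH=b5213941 here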
	I0816 21:41:40.073217    7436 kubeadm.go:390] StartCluster: {Name:addons-20210816214127-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816214127-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:40.073289    7436 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 21:41:40.073326    7436 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 21:41:40.095961    7436 cri.go:76] found id: ""
	I0816 21:41:40.096018    7436 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 21:41:40.102009    7436 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 21:41:40.107877    7436 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 21:41:40.107931    7436 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 21:41:40.113570    7436 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 21:41:40.113605    7436 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 21:42:00.965901    7436 out.go:204]   - Generating certificates and keys ...
	I0816 21:42:00.968720    7436 out.go:204]   - Booting up control plane ...
	I0816 21:42:00.971448    7436 out.go:204]   - Configuring RBAC rules ...
	I0816 21:42:00.973435    7436 cni.go:93] Creating CNI manager for ""
	I0816 21:42:00.973449    7436 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:42:00.974859    7436 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 21:42:00.974919    7436 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 21:42:00.978533    7436 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 21:42:00.978549    7436 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 21:42:00.990352    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 21:42:01.323564    7436 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 21:42:01.323632    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:01.323632    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210816214127-6487 minikube.k8s.io/updated_at=2021_08_16T21_42_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:01.436736    7436 ops.go:34] apiserver oom_adj: -16
	I0816 21:42:01.436746    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:01.997281    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:02.497093    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:02.997036    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:03.496764    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:03.996816    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:04.496722    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:04.996940    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:05.497317    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:05.996732    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:06.496700    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:06.996768    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:07.497215    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:08.497341    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:08.997053    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:09.497103    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:09.996760    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:10.497674    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:10.997061    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:11.497520    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:11.996728    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:12.497508    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:12.996794    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:13.496804    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:13.997645    7436 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:42:14.059406    7436 kubeadm.go:985] duration metric: took 12.735827798s to wait for elevateKubeSystemPrivileges.
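The burst of identical kubectl get sa default calls above is a poll, roughly every 500ms, until the default service account exists in the new cluster; that is the 12.7s that elevateKubeSystemPrivileges reports waiting. A shell sketch of the same wait (interval assumed):

    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done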
	I0816 21:42:14.059442    7436 kubeadm.go:392] StartCluster complete in 33.986229847s
	I0816 21:42:14.059457    7436 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:42:14.059576    7436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:42:14.059979    7436 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:42:14.574742    7436 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210816214127-6487" rescaled to 1
	I0816 21:42:14.574799    7436 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 21:42:14.576884    7436 out.go:177] * Verifying Kubernetes components...
	I0816 21:42:14.576934    7436 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:42:14.574822    7436 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 21:42:14.574841    7436 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0816 21:42:14.577051    7436 addons.go:59] Setting volumesnapshots=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577067    7436 addons.go:135] Setting addon volumesnapshots=true in "addons-20210816214127-6487"
	I0816 21:42:14.577076    7436 addons.go:59] Setting ingress=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577091    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577098    7436 addons.go:135] Setting addon ingress=true in "addons-20210816214127-6487"
	I0816 21:42:14.577119    7436 addons.go:59] Setting olm=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577128    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577139    7436 addons.go:135] Setting addon olm=true in "addons-20210816214127-6487"
	I0816 21:42:14.577161    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577164    7436 addons.go:59] Setting metrics-server=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577180    7436 addons.go:135] Setting addon metrics-server=true in "addons-20210816214127-6487"
	I0816 21:42:14.577210    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577411    7436 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577466    7436 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210816214127-6487"
	I0816 21:42:14.577496    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577651    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.577651    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.577658    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.577679    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.577770    7436 addons.go:59] Setting default-storageclass=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577793    7436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210816214127-6487"
	I0816 21:42:14.577796    7436 addons.go:59] Setting registry=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577798    7436 addons.go:59] Setting storage-provisioner=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577809    7436 addons.go:135] Setting addon registry=true in "addons-20210816214127-6487"
	I0816 21:42:14.577827    7436 addons.go:59] Setting helm-tiller=true in profile "addons-20210816214127-6487"
	I0816 21:42:14.577839    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577852    7436 addons.go:135] Setting addon helm-tiller=true in "addons-20210816214127-6487"
	I0816 21:42:14.577882    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.577811    7436 addons.go:135] Setting addon storage-provisioner=true in "addons-20210816214127-6487"
	W0816 21:42:14.577941    7436 addons.go:147] addon storage-provisioner should already be in state true
	I0816 21:42:14.577982    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.575013    7436 config.go:177] Loaded profile config "addons-20210816214127-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:42:14.577982    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.578043    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.578241    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.578315    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.578411    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.661430    7436 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0816 21:42:14.663122    7436 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0816 21:42:14.665450    7436 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0816 21:42:14.665534    7436 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0816 21:42:14.665546    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0816 21:42:14.665612    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.669708    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0816 21:42:14.671279    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0816 21:42:14.672777    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0816 21:42:14.674296    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0816 21:42:14.675828    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0816 21:42:14.677377    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0816 21:42:14.678880    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0816 21:42:14.680265    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0816 21:42:14.681630    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0816 21:42:14.681695    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 21:42:14.681707    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 21:42:14.681759    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.686852    7436 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0816 21:42:14.688313    7436 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0816 21:42:14.687628    7436 node_ready.go:35] waiting up to 6m0s for node "addons-20210816214127-6487" to be "Ready" ...
	I0816 21:42:14.685702    7436 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
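This pipeline fetches the live coredns ConfigMap, uses sed to splice a hosts block in front of the Corefile's "forward . /etc/resolv.conf" line, and replaces the object, so pods can resolve host.minikube.internal to the gateway. Unrolled for readability:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -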
	I0816 21:42:14.693250    7436 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0816 21:42:14.693278    7436 node_ready.go:49] node "addons-20210816214127-6487" has status "Ready":"True"
	I0816 21:42:14.693298    7436 node_ready.go:38] duration metric: took 3.800301ms waiting for node "addons-20210816214127-6487" to be "Ready" ...
	I0816 21:42:14.693308    7436 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 21:42:14.693309    7436 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 21:42:14.693317    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 21:42:14.692027    7436 addons.go:135] Setting addon default-storageclass=true in "addons-20210816214127-6487"
	W0816 21:42:14.693330    7436 addons.go:147] addon default-storageclass should already be in state true
	I0816 21:42:14.693357    7436 host.go:66] Checking if "addons-20210816214127-6487" exists ...
	I0816 21:42:14.693381    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.693865    7436 cli_runner.go:115] Run: docker container inspect addons-20210816214127-6487 --format={{.State.Status}}
	I0816 21:42:14.709869    7436 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0816 21:42:14.710078    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 21:42:14.710094    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 21:42:14.710239    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.732227    7436 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0816 21:42:14.732329    7436 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 21:42:14.732344    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0816 21:42:14.732406    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.734108    7436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-728qv" in "kube-system" namespace to be "Ready" ...
	I0816 21:42:14.738045    7436 out.go:177]   - Using image registry:2.7.1
	I0816 21:42:14.739479    7436 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0816 21:42:14.739576    7436 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 21:42:14.739591    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0816 21:42:14.739650    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.742541    7436 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0816 21:42:14.742570    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0816 21:42:14.742626    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.755894    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.758395    7436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 21:42:14.758511    7436 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 21:42:14.758526    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 21:42:14.758587    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.778636    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.801932    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.810128    7436 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 21:42:14.810151    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 21:42:14.810198    7436 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816214127-6487
	I0816 21:42:14.826201    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.831573    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.846368    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.859599    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.863711    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.865115    7436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210816214127-6487/id_rsa Username:docker}
	I0816 21:42:14.934607    7436 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 21:42:14.934633    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0816 21:42:15.017752    7436 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0816 21:42:15.017773    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0816 21:42:15.026954    7436 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 21:42:15.026975    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 21:42:15.030965    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 21:42:15.030967    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0816 21:42:15.031026    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0816 21:42:15.035476    7436 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0816 21:42:15.035493    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0816 21:42:15.043258    7436 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 21:42:15.043274    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 21:42:15.114268    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0816 21:42:15.118349    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 21:42:15.118372    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0816 21:42:15.213647    7436 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 21:42:15.213675    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 21:42:15.219308    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 21:42:15.219376    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0816 21:42:15.220534    7436 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0816 21:42:15.220583    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0816 21:42:15.222498    7436 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 21:42:15.222547    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 21:42:15.222875    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 21:42:15.317943    7436 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 21:42:15.317977    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 21:42:15.317943    7436 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 21:42:15.318027    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 21:42:15.332091    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 21:42:15.336418    7436 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 21:42:15.336442    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 21:42:15.427131    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0816 21:42:15.436293    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 21:42:15.515602    7436 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
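This line records minikube writing a host.minikube.internal -> 192.168.49.1 record into CoreDNS's configuration. A hedged way to confirm the injected record from outside the cluster, assuming a kubeconfig pointing at this profile, is to grep the coredns ConfigMap:

    # The hostname should appear in the coredns ConfigMap once the injection
    # above has completed (the exact key it lands under varies by release).
    kubectl -n kube-system get configmap coredns -o yaml | grep -n host.minikube.internal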
	I0816 21:42:15.524884    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 21:42:15.524910    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0816 21:42:15.525377    7436 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 21:42:15.525394    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0816 21:42:15.532703    7436 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 21:42:15.532725    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0816 21:42:15.635287    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 21:42:15.635317    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0816 21:42:15.732696    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 21:42:15.813314    7436 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 21:42:15.813345    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0816 21:42:15.826063    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 21:42:15.826086    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0816 21:42:16.022242    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 21:42:16.022276    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0816 21:42:16.026081    7436 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 21:42:16.026102    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0816 21:42:16.121127    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 21:42:16.121157    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0816 21:42:16.429849    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 21:42:16.514961    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0816 21:42:16.514997    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0816 21:42:16.635787    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 21:42:16.635813    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0816 21:42:16.818450    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0816 21:42:16.818475    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0816 21:42:16.929060    7436 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 21:42:16.929088    7436 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 21:42:16.932965    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:17.214055    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 21:42:17.913646    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.882625524s)
	I0816 21:42:18.835165    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (3.720804063s)
	I0816 21:42:18.835200    7436 addons.go:313] Verifying addon ingress=true in "addons-20210816214127-6487"
	I0816 21:42:18.837317    7436 out.go:177] * Verifying ingress addon...
	I0816 21:42:18.835525    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.612620567s)
	I0816 21:42:18.837455    7436 addons.go:313] Verifying addon metrics-server=true in "addons-20210816214127-6487"
	I0816 21:42:18.835569    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.503444307s)
	I0816 21:42:18.840045    7436 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 21:42:18.924267    7436 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 21:42:18.924291    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
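The kapi wait loop above (and the matching loops for the registry and csi-hostpath-driver labels further down) polls the labeled pods roughly twice a second until they leave Pending. A hedged kubectl equivalent of the same observation, using the selector and namespace from the log:

    # Watch the labeled pods until they leave Pending; note that completed Job
    # pods such as the admission-create hook report Completed, not Running.
    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch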
	I0816 21:42:19.320715    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:19.440840    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:19.932852    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:20.439511    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:20.520625    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.093453498s)
	W0816 21:42:20.520674    7436 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0816 21:42:20.520695    7436 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
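The failure echoed above is the usual CRD establishment race: crds.yaml and olm.yaml are applied in a single pass, so the OperatorGroup, ClusterServiceVersion, and CatalogSource objects reach the API server before the CRDs created a moment earlier have been registered, and minikube falls back to its retry loop. A hedged manual sequence that sidesteps the race by waiting for the CRDs first:

    # Register the CRDs, wait for the API server to establish them, then apply
    # the custom resources that depend on them.
    kubectl apply -f /etc/kubernetes/addons/crds.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/operatorgroups.operators.coreos.com \
      crd/clusterserviceversions.operators.coreos.com \
      crd/catalogsources.operators.coreos.com
    kubectl apply -f /etc/kubernetes/addons/olm.yaml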
	I0816 21:42:20.520764    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.084445975s)
	I0816 21:42:20.520821    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.788100451s)
	I0816 21:42:20.520836    7436 addons.go:313] Verifying addon registry=true in "addons-20210816214127-6487"
	I0816 21:42:20.522513    7436 out.go:177] * Verifying registry addon...
	I0816 21:42:20.524836    7436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 21:42:20.521233    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.091343925s)
	W0816 21:42:20.525166    7436 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0816 21:42:20.525215    7436 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
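Same race, different addon: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so its kind is not yet recognized on the first pass (the retried apply completes at 21:42:23 below with no further warning). One hedged way to check whether the relevant CRD has been established before reapplying:

    # Prints "True" once the API server has established the CRD.
    kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'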
	I0816 21:42:20.621644    7436 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 21:42:20.621722    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:20.797436    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0816 21:42:20.885787    7436 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 21:42:20.928009    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:21.132380    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:21.323434    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:21.429970    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:21.722616    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:21.913473    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.699355479s)
	I0816 21:42:21.913511    7436 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210816214127-6487"
	I0816 21:42:21.915781    7436 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 21:42:21.918052    7436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 21:42:21.924839    7436 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 21:42:21.924863    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:21.927795    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:22.126886    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:22.428684    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:22.429702    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:22.634221    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:22.930166    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:22.931025    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:23.125104    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:23.433126    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:23.434579    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:23.619004    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (2.821531355s)
	I0816 21:42:23.619165    7436 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.733329452s)
	I0816 21:42:23.625666    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:23.754771    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:23.927728    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:23.928633    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:24.125702    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:24.427782    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:24.428852    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:24.625835    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:24.927675    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:24.929375    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:25.125366    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:25.427774    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:25.429259    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:25.625244    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:25.755732    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:25.927728    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:25.929255    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:26.125411    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:26.427788    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:26.429234    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:26.626643    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:26.928697    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:26.929593    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:27.125414    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:27.428336    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:27.429217    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:27.626076    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:27.927356    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:27.929413    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:28.125250    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:28.254409    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:28.427509    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:28.429525    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:28.625275    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:28.928561    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:28.929497    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:29.125312    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:29.428308    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:29.429496    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:29.626120    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:29.928679    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:29.929714    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:30.125164    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:30.254571    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:30.427699    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:30.429413    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:30.625367    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:30.928829    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:30.929910    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:31.126026    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:31.427456    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:31.429442    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:31.625000    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:31.928015    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:31.929023    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:32.125837    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:32.255227    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:32.428512    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:32.431195    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:32.625789    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:32.927765    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:32.928820    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:33.125762    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:33.428578    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:33.429904    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:33.625511    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:33.928676    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:33.929664    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:34.127180    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:34.255394    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:34.428723    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:34.429365    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:34.625205    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:34.928291    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:34.929286    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:35.126195    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:35.428072    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:35.429157    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:35.626003    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:35.928127    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:35.928978    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:36.125876    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:36.428574    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:36.429963    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:36.625863    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:36.754715    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:36.929304    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:36.930365    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:37.125202    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:37.427600    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:37.429719    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:37.625459    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:37.927721    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:37.929742    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:38.135583    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:38.428266    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:38.429781    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:38.625957    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:38.755007    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:38.928241    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:38.929143    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:39.125986    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:39.428350    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:39.429558    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:39.625336    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:39.927664    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:39.929522    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:40.125363    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:40.428480    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:40.429477    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:40.625653    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:40.927408    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:40.929985    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:41.125707    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:41.254584    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:41.427970    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:41.429741    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:41.625257    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:41.928277    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:41.929398    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:42.125145    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:42.428517    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:42.429875    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:42.625864    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:42.928028    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:42.929162    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:43.126139    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:43.255026    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:43.429383    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:43.430733    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:43.625222    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:43.927647    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:43.929759    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:44.124918    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:44.427982    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:44.429459    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:44.626122    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:44.927982    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:44.929076    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:45.125748    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:45.428408    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:45.429287    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:45.625933    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:45.755315    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:45.928551    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:45.929991    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:46.125374    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:46.427332    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:46.429085    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:46.626303    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:46.927721    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:46.929658    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:47.125803    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:47.429451    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:47.430466    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:47.626075    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:47.755381    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:47.927649    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:47.929499    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:48.125418    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:48.428612    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:48.429627    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:48.628564    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:48.928078    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:48.929912    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:49.125626    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:49.427772    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:49.429665    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:49.625918    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:49.927987    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:49.930324    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:50.125973    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:50.255163    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:50.428393    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:50.429718    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:50.625796    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:50.928614    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:50.929782    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:51.125512    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:51.428597    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:51.429698    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:51.625455    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:51.928922    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:51.930370    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:52.126659    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:52.255495    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:52.428766    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:52.430230    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:52.625831    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:52.928302    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:52.929517    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:53.216871    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:53.430522    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:53.432629    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:53.626753    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:53.935554    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:53.937464    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:54.126031    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:54.319214    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:54.430351    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:54.432477    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:54.631036    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:54.929166    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:54.930696    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:55.126689    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:55.428406    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:55.430051    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:55.626415    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:55.928652    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:55.930024    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:56.125972    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:56.428220    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:56.430196    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:56.626534    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:56.755509    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:56.934579    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:56.935113    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:57.125452    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:57.428288    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:57.429676    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:57.626814    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:57.928511    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:57.930161    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:58.127642    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:58.427832    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:58.429292    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:58.626609    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:58.820545    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:42:58.927627    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:58.930053    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:59.126081    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:59.428780    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:59.430237    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:42:59.625878    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:42:59.928387    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:42:59.929486    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:00.125975    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:00.428569    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:00.430105    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:00.629981    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:00.929189    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:00.929525    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:01.125731    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:01.255532    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:43:01.427980    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:01.430007    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:01.626372    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:01.928907    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:01.930288    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:02.126683    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:02.428484    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:02.430096    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:02.625931    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:02.928224    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:02.929528    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:03.125863    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:03.255665    7436 pod_ready.go:102] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"False"
	I0816 21:43:03.427666    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:03.431328    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:03.625107    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:03.755217    7436 pod_ready.go:92] pod "coredns-558bd4d5db-728qv" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:03.755241    7436 pod_ready.go:81] duration metric: took 49.021108846s waiting for pod "coredns-558bd4d5db-728qv" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.755250    7436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-pcsdx" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.756933    7436 pod_ready.go:97] error getting pod "coredns-558bd4d5db-pcsdx" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pcsdx" not found
	I0816 21:43:03.756951    7436 pod_ready.go:81] duration metric: took 1.696869ms waiting for pod "coredns-558bd4d5db-pcsdx" in "kube-system" namespace to be "Ready" ...
	E0816 21:43:03.756959    7436 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-pcsdx" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-pcsdx" not found
	I0816 21:43:03.756965    7436 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.760181    7436 pod_ready.go:92] pod "etcd-addons-20210816214127-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:03.760194    7436 pod_ready.go:81] duration metric: took 3.224101ms waiting for pod "etcd-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.760205    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.763219    7436 pod_ready.go:92] pod "kube-apiserver-addons-20210816214127-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:03.763233    7436 pod_ready.go:81] duration metric: took 3.022086ms waiting for pod "kube-apiserver-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.763241    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.768345    7436 pod_ready.go:92] pod "kube-controller-manager-addons-20210816214127-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:03.768358    7436 pod_ready.go:81] duration metric: took 5.112154ms waiting for pod "kube-controller-manager-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.768366    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xb7v4" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.928117    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:03.929277    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:03.953517    7436 pod_ready.go:92] pod "kube-proxy-xb7v4" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:03.953533    7436 pod_ready.go:81] duration metric: took 185.161551ms waiting for pod "kube-proxy-xb7v4" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:03.953542    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:04.125886    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:04.354471    7436 pod_ready.go:92] pod "kube-scheduler-addons-20210816214127-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:43:04.354493    7436 pod_ready.go:81] duration metric: took 400.944069ms waiting for pod "kube-scheduler-addons-20210816214127-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:43:04.354503    7436 pod_ready.go:38] duration metric: took 49.66118176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 21:43:04.354523    7436 api_server.go:50] waiting for apiserver process to appear ...
	I0816 21:43:04.354572    7436 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 21:43:04.384781    7436 api_server.go:70] duration metric: took 49.80995822s to wait for apiserver process to appear ...
	I0816 21:43:04.384806    7436 api_server.go:86] waiting for apiserver healthz status ...
	I0816 21:43:04.384816    7436 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 21:43:04.388900    7436 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 21:43:04.389671    7436 api_server.go:139] control plane version: v1.21.3
	I0816 21:43:04.389692    7436 api_server.go:129] duration metric: took 4.880129ms to wait for apiserver health ...
	I0816 21:43:04.389702    7436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 21:43:04.427864    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:04.429770    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:04.557656    7436 system_pods.go:59] 19 kube-system pods found
	I0816 21:43:04.557683    7436 system_pods.go:61] "coredns-558bd4d5db-728qv" [a70b13b3-7927-4cab-9dcc-bc7a19e703ae] Running
	I0816 21:43:04.557688    7436 system_pods.go:61] "csi-hostpath-attacher-0" [3a5ec919-730c-46b0-8da5-50845b178f51] Running
	I0816 21:43:04.557692    7436 system_pods.go:61] "csi-hostpath-provisioner-0" [0c26b998-57e4-4919-814c-ae3f6c5a7a6e] Running
	I0816 21:43:04.557700    7436 system_pods.go:61] "csi-hostpath-resizer-0" [08ac0c6c-152f-4b9d-972a-539bb35f078d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 21:43:04.557705    7436 system_pods.go:61] "csi-hostpath-snapshotter-0" [e038c7be-da46-4ea4-ad11-eb0dd05368e7] Running
	I0816 21:43:04.557712    7436 system_pods.go:61] "csi-hostpathplugin-0" [67f4c965-760d-4593-8b87-6b309ed0141b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0816 21:43:04.557717    7436 system_pods.go:61] "etcd-addons-20210816214127-6487" [377ac80f-8f46-4162-b4f4-e7e897348b25] Running
	I0816 21:43:04.557722    7436 system_pods.go:61] "kindnet-465vt" [b7134712-1f27-42d1-bb24-f10457ffedfa] Running
	I0816 21:43:04.557727    7436 system_pods.go:61] "kube-apiserver-addons-20210816214127-6487" [046d73ab-c928-4ae0-bf58-45406b2d73a5] Running
	I0816 21:43:04.557732    7436 system_pods.go:61] "kube-controller-manager-addons-20210816214127-6487" [0f5d52fb-762f-47b6-ae8f-3e00d6e66c65] Running
	I0816 21:43:04.557742    7436 system_pods.go:61] "kube-proxy-xb7v4" [f7d3efb9-33a3-476c-8788-23e8cbfdf8b6] Running
	I0816 21:43:04.557746    7436 system_pods.go:61] "kube-scheduler-addons-20210816214127-6487" [f5692e5a-c74d-4a00-9652-fdcbb92c34b2] Running
	I0816 21:43:04.557751    7436 system_pods.go:61] "metrics-server-77c99ccb96-9s24l" [fecc4cb4-f61b-4298-91d0-1d3127525972] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 21:43:04.557759    7436 system_pods.go:61] "registry-d8gdr" [4ba89676-ca7d-4e37-b69e-ff37274f3367] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 21:43:04.557768    7436 system_pods.go:61] "registry-proxy-p4pwl" [b330dd6b-0c8f-447d-aafb-29c153c4385f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 21:43:04.557776    7436 system_pods.go:61] "snapshot-controller-989f9ddc8-4rn8p" [9e84d070-d557-422f-a76c-8228c8a2f600] Running
	I0816 21:43:04.557782    7436 system_pods.go:61] "snapshot-controller-989f9ddc8-krb6d" [4aaaae63-3926-4b88-9d7b-b5a2989482ae] Running
	I0816 21:43:04.557789    7436 system_pods.go:61] "storage-provisioner" [b9d06889-adec-4c7f-b41f-60b1f9cdea86] Running
	I0816 21:43:04.557794    7436 system_pods.go:61] "tiller-deploy-768d69497-wvx5x" [41a034c9-709c-45c2-ae5a-ea40334e7602] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 21:43:04.557801    7436 system_pods.go:74] duration metric: took 168.09498ms to wait for pod list to return data ...
	I0816 21:43:04.557812    7436 default_sa.go:34] waiting for default service account to be created ...
	I0816 21:43:04.625604    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:04.753861    7436 default_sa.go:45] found service account: "default"
	I0816 21:43:04.753884    7436 default_sa.go:55] duration metric: took 196.063344ms for default service account to be created ...
	I0816 21:43:04.753893    7436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 21:43:04.928384    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:04.929915    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:04.959550    7436 system_pods.go:86] 19 kube-system pods found
	I0816 21:43:04.959580    7436 system_pods.go:89] "coredns-558bd4d5db-728qv" [a70b13b3-7927-4cab-9dcc-bc7a19e703ae] Running
	I0816 21:43:04.959589    7436 system_pods.go:89] "csi-hostpath-attacher-0" [3a5ec919-730c-46b0-8da5-50845b178f51] Running
	I0816 21:43:04.959595    7436 system_pods.go:89] "csi-hostpath-provisioner-0" [0c26b998-57e4-4919-814c-ae3f6c5a7a6e] Running
	I0816 21:43:04.959606    7436 system_pods.go:89] "csi-hostpath-resizer-0" [08ac0c6c-152f-4b9d-972a-539bb35f078d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 21:43:04.959615    7436 system_pods.go:89] "csi-hostpath-snapshotter-0" [e038c7be-da46-4ea4-ad11-eb0dd05368e7] Running
	I0816 21:43:04.959626    7436 system_pods.go:89] "csi-hostpathplugin-0" [67f4c965-760d-4593-8b87-6b309ed0141b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0816 21:43:04.959635    7436 system_pods.go:89] "etcd-addons-20210816214127-6487" [377ac80f-8f46-4162-b4f4-e7e897348b25] Running
	I0816 21:43:04.959683    7436 system_pods.go:89] "kindnet-465vt" [b7134712-1f27-42d1-bb24-f10457ffedfa] Running
	I0816 21:43:04.959703    7436 system_pods.go:89] "kube-apiserver-addons-20210816214127-6487" [046d73ab-c928-4ae0-bf58-45406b2d73a5] Running
	I0816 21:43:04.959717    7436 system_pods.go:89] "kube-controller-manager-addons-20210816214127-6487" [0f5d52fb-762f-47b6-ae8f-3e00d6e66c65] Running
	I0816 21:43:04.959723    7436 system_pods.go:89] "kube-proxy-xb7v4" [f7d3efb9-33a3-476c-8788-23e8cbfdf8b6] Running
	I0816 21:43:04.959729    7436 system_pods.go:89] "kube-scheduler-addons-20210816214127-6487" [f5692e5a-c74d-4a00-9652-fdcbb92c34b2] Running
	I0816 21:43:04.959736    7436 system_pods.go:89] "metrics-server-77c99ccb96-9s24l" [fecc4cb4-f61b-4298-91d0-1d3127525972] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 21:43:04.959745    7436 system_pods.go:89] "registry-d8gdr" [4ba89676-ca7d-4e37-b69e-ff37274f3367] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 21:43:04.959758    7436 system_pods.go:89] "registry-proxy-p4pwl" [b330dd6b-0c8f-447d-aafb-29c153c4385f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 21:43:04.959766    7436 system_pods.go:89] "snapshot-controller-989f9ddc8-4rn8p" [9e84d070-d557-422f-a76c-8228c8a2f600] Running
	I0816 21:43:04.959776    7436 system_pods.go:89] "snapshot-controller-989f9ddc8-krb6d" [4aaaae63-3926-4b88-9d7b-b5a2989482ae] Running
	I0816 21:43:04.959782    7436 system_pods.go:89] "storage-provisioner" [b9d06889-adec-4c7f-b41f-60b1f9cdea86] Running
	I0816 21:43:04.959794    7436 system_pods.go:89] "tiller-deploy-768d69497-wvx5x" [41a034c9-709c-45c2-ae5a-ea40334e7602] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 21:43:04.959804    7436 system_pods.go:126] duration metric: took 205.905172ms to wait for k8s-apps to be running ...
	I0816 21:43:04.959816    7436 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 21:43:04.959860    7436 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:43:05.018568    7436 system_svc.go:56] duration metric: took 58.7439ms WaitForService to wait for kubelet.
	I0816 21:43:05.018634    7436 kubeadm.go:547] duration metric: took 50.443815094s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 21:43:05.018669    7436 node_conditions.go:102] verifying NodePressure condition ...
	I0816 21:43:05.126793    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:05.154822    7436 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 21:43:05.154852    7436 node_conditions.go:123] node cpu capacity is 8
	I0816 21:43:05.154869    7436 node_conditions.go:105] duration metric: took 136.19454ms to run NodePressure ...
	I0816 21:43:05.154882    7436 start.go:231] waiting for startup goroutines ...
	I0816 21:43:05.428233    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:05.430263    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:05.626077    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:05.928398    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:05.929982    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:06.126308    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:06.433434    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:06.436901    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:06.626464    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:06.927864    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:06.929407    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:07.126450    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:07.427944    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:07.429951    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:07.626733    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:07.928103    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:07.930928    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:08.126288    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:08.428713    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:08.431442    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:08.628354    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:08.928846    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:08.929778    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:09.126537    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:09.428872    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:09.430222    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:09.626339    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:09.928260    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:09.930963    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:10.126041    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:10.428871    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:10.429564    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:10.625522    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:10.929322    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:10.930596    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:11.126695    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:11.429153    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:11.431623    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:11.626485    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:11.928517    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:11.929544    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:12.126039    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:12.428244    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:12.429230    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:12.628895    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:12.927947    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:12.928924    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:13.125701    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:13.428097    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:13.429969    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:13.625964    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:13.928280    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:13.930565    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:14.125793    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:14.428668    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:14.429536    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:14.625582    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:14.927511    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:14.929937    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:15.126824    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:15.427537    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:15.429728    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:15.626318    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:15.935593    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:15.935877    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:16.135399    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:16.431413    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:16.433024    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:16.626059    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:16.928721    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:16.930804    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:17.125415    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:17.430287    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:17.431707    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:17.625446    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:17.928779    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:17.929770    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:18.125336    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:18.428120    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:18.429125    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:18.626176    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:18.928280    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:18.929187    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:19.125609    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:19.427756    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:19.436077    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:19.629972    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:19.928209    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:19.929456    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:20.125246    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:20.428168    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:20.429035    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:20.625707    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 21:43:20.927668    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:20.929240    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:21.125738    7436 kapi.go:108] duration metric: took 1m0.600900245s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 21:43:21.428019    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:21.430611    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:21.928556    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:21.929836    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:22.428411    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:22.429530    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:22.928623    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:22.929989    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:23.428358    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:23.429335    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:23.928337    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:23.929669    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:24.427636    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:24.430485    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:24.927059    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:24.929230    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:25.428074    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:25.429005    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:25.927425    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:25.930106    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:26.428546    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:26.430101    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:26.927685    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:26.930694    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:27.429096    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:27.430239    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:27.931851    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:27.934511    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:28.533753    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:28.535170    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:28.934114    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:28.935520    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:29.429654    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:29.439749    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:29.930760    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:29.935107    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:30.433851    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:30.436928    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:30.931960    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:30.933352    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:31.922787    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:31.925264    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:31.933159    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:32.524084    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:32.524768    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:32.933947    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:32.934387    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:33.429941    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:33.431660    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:33.936901    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:33.937431    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:34.432012    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:34.432981    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:34.928130    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:34.935075    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:35.433600    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:35.433751    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:36.019091    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:36.021664    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:36.517841    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:36.528475    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:37.308020    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:37.308224    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:37.428265    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:37.431785    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:37.928351    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:37.930387    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:38.428539    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:38.430045    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:38.928435    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:38.929756    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:39.428221    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:39.429557    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:39.928298    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:39.929442    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:40.429249    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:40.430397    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:40.929528    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:40.930145    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:41.428077    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:41.429879    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:41.927938    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:41.930009    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:42.734878    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:42.735860    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:42.928285    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:42.929787    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:43.428152    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:43.429231    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:43.928444    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 21:43:43.929519    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:44.429005    7436 kapi.go:108] duration metric: took 1m25.588944352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 21:43:44.430411    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:44.929728    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:45.429915    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:45.930671    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:46.429247    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:46.938559    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:47.429530    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:47.929827    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 21:43:48.429883    7436 kapi.go:108] duration metric: took 1m26.511831s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 21:43:48.431874    7436 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, helm-tiller, olm, volumesnapshots, registry, ingress, csi-hostpath-driver
	I0816 21:43:48.431895    7436 addons.go:344] enableAddons completed in 1m33.857060957s
	I0816 21:43:48.481938    7436 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 21:43:48.483973    7436 out.go:177] * Done! kubectl is now configured to use "addons-20210816214127-6487" cluster and "default" namespace by default
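
The kapi.go entries above are a fixed-interval readiness poll: each addon's label selector is re-listed roughly every 500ms, and the loop logs "Pending" until every matching pod reports Ready or the wait gives up. The following is a minimal client-go sketch of that pattern; the helper name, interval, and selector handling are illustrative assumptions, not minikube's actual kapi.go code.

	// Package waitutil sketches the poll loop behind the kapi.go lines above:
	// list pods by label selector on a fixed interval until all are Ready.
	// Names and details are illustrative, not minikube's implementation.
	package waitutil

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForLabeledPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil // transient errors and empty lists are retried
			}
			for _, p := range pods.Items {
				ready := false
				for _, cond := range p.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil // every matching pod is Ready; stop polling
		})
	}

This is why the registry, ingress, and csi-hostpath-driver waits above each resolve independently ("took 1m0.6s", "1m25.5s", "1m26.5s"): each selector runs its own poll loop against the same cluster.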
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 21:41:32 UTC, end at Mon 2021-08-16 21:49:19 UTC. --
	Aug 16 21:49:02 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:02.031480623Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-59b45fb494-tghx4 Namespace:ingress-nginx ID:3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd NetNS:/var/run/netns/30141b6c-c020-4e21-9b9d-ce0851be5e15 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 21:49:02 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:02.031639233Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Aug 16 21:49:02 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:02.180823764Z" level=info msg="Removing container: 8567801b605084622a6db4c5af9f33c1329595b34cd102883e57577855be3e9e" id=0452586d-7006-4e7b-a875-c5a4fa5b61e4 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:02 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:02.198008402Z" level=info msg="Removed container 8567801b605084622a6db4c5af9f33c1329595b34cd102883e57577855be3e9e: ingress-nginx/ingress-nginx-controller-59b45fb494-tghx4/controller" id=0452586d-7006-4e7b-a875-c5a4fa5b61e4 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:02 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:02.258798134Z" level=info msg="Stopped pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=b76e03d8-75c9-48da-aea3-b42109572d1d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:03 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:03.183335916Z" level=info msg="Stopping pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=8949f858-2443-4b66-9524-ee71855e30f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:03 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:03.183390615Z" level=info msg="Stopped pod sandbox (already stopped): 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=8949f858-2443-4b66-9524-ee71855e30f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:04 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:04.185550014Z" level=info msg="Stopping pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=67ff25f5-be72-404b-be29-e34a8cf376ed name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:04 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:04.185607099Z" level=info msg="Stopped pod sandbox (already stopped): 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=67ff25f5-be72-404b-be29-e34a8cf376ed name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.259114776Z" level=info msg="Removing container: 0a6e4183ecad7e3743b06ff27c35ab6520a74e41af5c5d5cfcce6a863baead0b" id=57c5b9ad-612c-4bf9-a562-fdbb62a4681d name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.302575187Z" level=info msg="Removed container 0a6e4183ecad7e3743b06ff27c35ab6520a74e41af5c5d5cfcce6a863baead0b: ingress-nginx/ingress-nginx-admission-create-c9pwv/create" id=57c5b9ad-612c-4bf9-a562-fdbb62a4681d name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.303647869Z" level=info msg="Removing container: 9206745b40afbd74cc79fd3407e2d509a0c8ba8b21a8e98588168c1bae07255c" id=3967ec69-3816-4a86-9575-e25bb91de9fd name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.342166237Z" level=info msg="Removed container 9206745b40afbd74cc79fd3407e2d509a0c8ba8b21a8e98588168c1bae07255c: ingress-nginx/ingress-nginx-admission-patch-c6zdq/patch" id=3967ec69-3816-4a86-9575-e25bb91de9fd name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.343531261Z" level=info msg="Stopping pod sandbox: 50259a4470e5079d60ffdd8d2cd95a105b9a2f8a3a5fd7c10de3900943c7c5c7" id=e6bd4b25-63aa-4620-91c2-3ae6b9446550 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.343578014Z" level=info msg="Stopped pod sandbox (already stopped): 50259a4470e5079d60ffdd8d2cd95a105b9a2f8a3a5fd7c10de3900943c7c5c7" id=e6bd4b25-63aa-4620-91c2-3ae6b9446550 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.343865002Z" level=info msg="Removing pod sandbox: 50259a4470e5079d60ffdd8d2cd95a105b9a2f8a3a5fd7c10de3900943c7c5c7" id=2beb984c-e736-4fca-aec5-6072360d4dd6 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.456047577Z" level=info msg="Removed pod sandbox: 50259a4470e5079d60ffdd8d2cd95a105b9a2f8a3a5fd7c10de3900943c7c5c7" id=2beb984c-e736-4fca-aec5-6072360d4dd6 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.456536839Z" level=info msg="Stopping pod sandbox: dc96591ae7d6717033b56a337315cab2bff4697fa77fb392e1ebd8ba940bed4c" id=a8b7d9e7-bca4-4315-b0c7-c1257684360f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.456580214Z" level=info msg="Stopped pod sandbox (already stopped): dc96591ae7d6717033b56a337315cab2bff4697fa77fb392e1ebd8ba940bed4c" id=a8b7d9e7-bca4-4315-b0c7-c1257684360f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.456828677Z" level=info msg="Removing pod sandbox: dc96591ae7d6717033b56a337315cab2bff4697fa77fb392e1ebd8ba940bed4c" id=5d1f86e3-8a6b-4513-8bb7-942681174d58 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.560052212Z" level=info msg="Removed pod sandbox: dc96591ae7d6717033b56a337315cab2bff4697fa77fb392e1ebd8ba940bed4c" id=5d1f86e3-8a6b-4513-8bb7-942681174d58 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.560571712Z" level=info msg="Stopping pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=dbedc3c5-8536-4f22-949b-02a2f53d2363 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.560617555Z" level=info msg="Stopped pod sandbox (already stopped): 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=dbedc3c5-8536-4f22-949b-02a2f53d2363 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.560952347Z" level=info msg="Removing pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=7e55508d-26eb-4aed-85fa-c4e6c501ca05 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 16 21:49:09 addons-20210816214127-6487 crio[367]: time="2021-08-16 21:49:09.652033343Z" level=info msg="Removed pod sandbox: 3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd" id=7e55508d-26eb-4aed-85fa-c4e6c501ca05 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
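
The CRI-O entries above show the teardown path for the ingress pods as a pair of CRI RPCs: StopPodSandbox tears down the sandbox and its CNI attachment (and is idempotent, hence the repeated "already stopped" replies), then RemovePodSandbox deletes the stopped sandbox. A hedged sketch of issuing the same v1alpha2 RPCs named in the log against CRI-O's default socket; the program structure and error handling are illustrative.

	package main

	import (
		"context"
		"log"

		"google.golang.org/grpc"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// CRI-O's default runtime endpoint on this host.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		rt := pb.NewRuntimeServiceClient(conn)

		// Sandbox ID taken from the ingress-nginx-controller entries in the log above.
		sandboxID := "3970462f3b530612f1bcd3343d335f4651d5e712ac46918752019796af93a7dd"

		// Stopping is idempotent: repeating it yields "already stopped", as logged.
		if _, err := rt.StopPodSandbox(context.Background(), &pb.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
			log.Fatal(err)
		}
		// Removal deletes the stopped sandbox and frees its resources.
		if _, err := rt.RemovePodSandbox(context.Background(), &pb.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
			log.Fatal(err)
		}
	}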
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	aa7e00b74ca05       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-restore-operator     0                   7efa9adc50850
	89e127d0a62cb       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-backup-operator      0                   7efa9adc50850
	26cdab7f84a71       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            4 minutes ago       Running             etcd-operator             0                   7efa9adc50850
	0a070d6d353b1       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   4 minutes ago       Running             private-image-eu          0                   1f3ba0619471f
	810fc644c8a28       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                4 minutes ago       Running             private-image             0                   2970b809d18d0
	761451c86e973       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               4 minutes ago       Running             busybox                   0                   c647e5566c570
	53074c98d4ebf       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 4 minutes ago       Running             nginx                     0                   43db5e3fa5963
	6fd996b0259d6       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   655375f0bc9d0
	a0d1f205a8cbd       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   d70926ba9086f
	0260c538cf6fc       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                                                                6 minutes ago       Running             olm-operator              0                   3bcac73810851
	05b056f4bd098       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 6 minutes ago       Running             registry-server           0                   80760135af3a8
	53157dcc71ed6       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                6 minutes ago       Running             coredns                   0                   58daceb080a9e
	3ea60ec9c0af1       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             catalog-operator          0                   1b59cd129b895
	e76238027a727       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                6 minutes ago       Running             storage-provisioner       0                   49b8eb3fbe126
	92298ae6db37e       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                7 minutes ago       Running             kube-proxy                0                   b13cc1c562f81
	7b7347ae8cce4       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                                                                7 minutes ago       Running             kindnet-cni               0                   53afacdd4347d
	b3151bb521490       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                7 minutes ago       Running             kube-apiserver            0                   9a5e484704f9c
	74301aa4dcfac       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                7 minutes ago       Running             kube-scheduler            0                   1b67aa96d3366
	87481973dbd72       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                7 minutes ago       Running             etcd                      0                   368fdde2f3122
	68edc1fe8b7df       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                7 minutes ago       Running             kube-controller-manager   0                   e7494f50ac92e
	
	* 
	* ==> coredns [53157dcc71ed6bd534b3e3170dc8233e23725493a6a845e37d4dc6f07183126a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210816214127-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210816214127-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210816214127-6487
	                    minikube.k8s.io/updated_at=2021_08_16T21_42_01_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210816214127-6487
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 21:41:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210816214127-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 21:49:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 21:45:07 +0000   Mon, 16 Aug 2021 21:41:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 21:45:07 +0000   Mon, 16 Aug 2021 21:41:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 21:45:07 +0000   Mon, 16 Aug 2021 21:41:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 21:45:07 +0000   Mon, 16 Aug 2021 21:42:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210816214127-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                01e11463-9f4e-42ba-ab9e-c8de6b666e2b
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     private-image-7ff9c8c74f-fm799                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  default                     private-image-eu-5956d58f9f-s2jmw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-558bd4d5db-728qv                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m5s
	  kube-system                 etcd-addons-20210816214127-6487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m13s
	  kube-system                 kindnet-465vt                                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m5s
	  kube-system                 kube-apiserver-addons-20210816214127-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-addons-20210816214127-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-xb7v4                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-addons-20210816214127-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  my-etcd                     etcd-operator-85cd4f54cd-vfp59                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  olm                         catalog-operator-75d496484d-wxwkk                     10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         6m59s
	  olm                         olm-operator-859c88c96-bpxtv                          10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         6m59s
	  olm                         operatorhubio-catalog-hr9q8                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         6m21s
	  olm                         packageserver-5f7f778fc6-kz29t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  olm                         packageserver-5f7f778fc6-m6bsd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                880m (11%)  100m (1%)
	  memory             510Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 7m13s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m13s  kubelet     Node addons-20210816214127-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet     Node addons-20210816214127-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet     Node addons-20210816214127-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m6s   kubelet     Node addons-20210816214127-6487 status is now: NodeReady
	  Normal  Starting                 7m1s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[  +5.428347] IPv4: martian source 10.244.0.32 from 10.244.0.32, on dev veth34290955
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 56 f3 d7 33 37 a5 08 06        ......V..37...
	[  +1.400486] IPv4: martian source 10.244.0.33 from 10.244.0.33, on dev veth0cc5a3e8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 89 89 e3 c3 24 08 06        ...........$..
	[  +0.200259] IPv4: martian source 10.244.0.34 from 10.244.0.34, on dev veth575526ac
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 13 ca 61 09 1b 08 06        .........a....
	[Aug16 21:45] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[ +33.017616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[Aug16 21:46] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[  +1.027548] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[  +2.015836] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[  +4.063701] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[  +8.191421] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[Aug16 21:47] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	[ +33.533578] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 72 26 ac 65 57 da e2 80 ae c5 ab e8 08 00        r&.eW.........
	
	* 
	* ==> etcd [26cdab7f84a7189fb1b5df4d7a490ac2bbc36360ca1139867278d85a0def7602] <==
	* time="2021-08-16T21:45:00Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-16T21:45:00Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-16T21:45:00Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-16T21:45:00Z" level=info msg="Go OS/Arch: linux/amd64"
	E0816 21:45:00.580950       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"4e72d06d-0c46-45ef-b738-75a89f046696", ResourceVersion:"1883", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764747100, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-vfp59\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-16T21:45:00Z\",\"renewTime\":\"2021-08-16T21:45:00Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-vfp59 became leader'
	
	* 
	* ==> etcd [87481973dbd7277187b381760d7a06c5e532125d083633459dc678fce0417dfe] <==
	* 2021-08-16 21:45:19.461623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:45:29.461701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:45:39.461933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:45:49.462210 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:45:59.461896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:09.461658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:19.461558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:29.461525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:39.462876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:49.461429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:46:59.461906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:09.461723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:19.461863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:29.461564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:39.462316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:49.461683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:47:59.462282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:09.461703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:19.461370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:29.515273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:39.462256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:49.461374 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:48:59.462167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:49:09.461729 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:49:19.461787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [89e127d0a62cb83e3e98b0c62164a310daf7ec5b153ba89807ff0a7ad5cdb818] <==
	* time="2021-08-16T21:45:00Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-16T21:45:00Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-16T21:45:00Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-16T21:45:00Z" level=info msg="Git SHA: c8a1c64"
	E0816 21:45:00.775860       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"4f924690-4cfe-4f24-bfc5-9d7312236f58", ResourceVersion:"1887", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764747100, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-vfp59\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-16T21:45:00Z\",\"renewTime\":\"2021-08-16T21:45:00Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-vfp59 became leader'
	time="2021-08-16T21:45:00Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [aa7e00b74ca051f0d438dd4f5c35eac0e5cfb05f04d866de61a70f43a60d996e] <==
	* time="2021-08-16T21:45:01Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-16T21:45:01Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-16T21:45:01Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-16T21:45:01Z" level=info msg="Git SHA: c8a1c64"
	E0816 21:45:01.105594       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"da4fa530-a874-4540-91b8-8ea9002b767c", ResourceVersion:"1895", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764747101, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-vfp59\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-16T21:45:01Z\",\"renewTime\":\"2021-08-16T21:45:01Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-16T21:45:01Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-vfp59 became leader'
	time="2021-08-16T21:45:01Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-16T21:45:01Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> kernel <==
	*  21:49:19 up 28 min,  0 users,  load average: 0.10, 0.49, 0.35
	Linux addons-20210816214127-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b3151bb521490281fa973c2e2e21df4ebeddb0ec3b796eccbb4bba2c2a1abf8f] <==
	* I0816 21:45:02.920063       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:45:02.920108       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:45:02.920116       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 21:45:14.728468       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0816 21:45:14.747810       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0816 21:45:14.833948       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0816 21:45:45.290021       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:45:45.290062       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:45:45.290069       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:46:19.373345       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:46:19.373382       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:46:19.373390       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:46:52.742157       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:46:52.742197       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:46:52.742206       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:47:29.057184       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:47:29.057230       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:47:29.057238       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:48:03.254174       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:48:03.254214       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:48:03.254222       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:48:42.966435       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:48:42.966473       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:48:42.966484       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0816 21:49:00.134565       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [68edc1fe8b7df60b9f716c1d937de7c6467d36c2607de6526b79532175ad5980] <==
	* E0816 21:45:19.006072       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:19.289699       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:21.514362       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:23.598225       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:24.679204       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:31.996570       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:33.803865       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:36.045779       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:49.648687       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:45:51.678815       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:46:00.836190       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:46:18.036995       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:46:21.388573       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:46:31.109550       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:06.729631       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:07.797718       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:20.993876       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:38.671100       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:47.061753       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:47:59.863036       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:48:21.079755       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:48:35.985576       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:48:38.545554       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:49:06.232026       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 21:49:17.909118       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [92298ae6db37e3f46f9da10fb5d718583aaa0921aee403762657bc3dcc3fdf79] <==
	* I0816 21:42:18.048851       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 21:42:18.048912       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 21:42:18.048942       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 21:42:18.227077       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 21:42:18.227594       1 server_others.go:212] Using iptables Proxier.
	I0816 21:42:18.227619       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 21:42:18.227632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 21:42:18.227984       1 server.go:643] Version: v1.21.3
	I0816 21:42:18.228917       1 config.go:315] Starting service config controller
	I0816 21:42:18.228930       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 21:42:18.228949       1 config.go:224] Starting endpoint slice config controller
	I0816 21:42:18.228953       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 21:42:18.236698       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 21:42:18.312886       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 21:42:18.329535       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 21:42:18.329542       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [74301aa4dcfaca390cb13ba808435bd4f75bbe637482d27db59340f5417279da] <==
	* W0816 21:41:57.916415       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 21:41:57.916458       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 21:41:57.916503       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 21:41:57.931089       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 21:41:57.931131       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 21:41:57.931422       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 21:41:57.931460       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 21:41:57.932361       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 21:41:57.933830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 21:41:57.933869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 21:41:57.933866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 21:41:57.933982       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 21:41:57.934035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 21:41:57.934059       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 21:41:57.934066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 21:41:57.934172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 21:41:57.934177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 21:41:57.934194       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 21:41:57.934251       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 21:41:57.934265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 21:41:57.934343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 21:41:58.900936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 21:41:58.949814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 21:41:59.013020       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 21:42:02.131365       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 21:41:32 UTC, end at Mon 2021-08-16 21:49:20 UTC. --
	Aug 16 21:48:30 addons-20210816214127-6487 kubelet[1566]: I0816 21:48:30.212581    1566 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s2jmw" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 21:48:38 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:38.463502    1566 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:48:41 addons-20210816214127-6487 kubelet[1566]: W0816 21:48:41.976008    1566 container.go:586] Failed to update stats for container "/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250": /sys/fs/cgroup/cpuset/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/cpuset.cpus found to be empty, continuing to push stats
	Aug 16 21:48:48 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:48.577347    1566 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:48:50 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:50.730938    1566 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-tghx4.169be7c6adc6822d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-tghx4", UID:"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62", APIVersion:"v1", ResourceVersion:"596", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210816214127-6487"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03ed530ab7c6e2d, ext:409961043971, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03ed530ab7c6e2d, ext:409961043971, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-tghx4.169be7c6adc6822d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 16 21:48:58 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:58.526394    1566 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-tghx4.169be7c87e5e43ed", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-tghx4", UID:"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62", APIVersion:"v1", ResourceVersion:"596", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210816214127-6487"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03ed5329f3ddfed, ext:417755617721, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03ed5329f3ddfed, ext:417755617721, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-tghx4.169be7c87e5e43ed" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 16 21:48:58 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:58.527493    1566 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-tghx4.169be7c87e5e5043", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-tghx4", UID:"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62", APIVersion:"v1", ResourceVersion:"596", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210816214127-6487"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03ed5329f3dec43, ext:417755620876, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03ed5329f3dec43, ext:417755620876, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-tghx4.169be7c87e5e5043" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 16 21:48:58 addons-20210816214127-6487 kubelet[1566]: E0816 21:48:58.686124    1566 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:49:02 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:02.179828    1566 scope.go:111] "RemoveContainer" containerID="8567801b605084622a6db4c5af9f33c1329595b34cd102883e57577855be3e9e"
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.294930    1566 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvgg\" (UniqueName: \"kubernetes.io/projected/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-kube-api-access-pdvgg\") pod \"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62\" (UID: \"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62\") "
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.294990    1566 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-webhook-cert\") pod \"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62\" (UID: \"7a09bd11-f8af-4bf7-b55d-d9446cc0bc62\") "
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.316235    1566 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-kube-api-access-pdvgg" (OuterVolumeSpecName: "kube-api-access-pdvgg") pod "7a09bd11-f8af-4bf7-b55d-d9446cc0bc62" (UID: "7a09bd11-f8af-4bf7-b55d-d9446cc0bc62"). InnerVolumeSpecName "kube-api-access-pdvgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.320219    1566 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7a09bd11-f8af-4bf7-b55d-d9446cc0bc62" (UID: "7a09bd11-f8af-4bf7-b55d-d9446cc0bc62"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.396218    1566 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-webhook-cert\") on node \"addons-20210816214127-6487\" DevicePath \"\""
	Aug 16 21:49:03 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:03.396250    1566 reconciler.go:319] "Volume detached for volume \"kube-api-access-pdvgg\" (UniqueName: \"kubernetes.io/projected/7a09bd11-f8af-4bf7-b55d-d9446cc0bc62-kube-api-access-pdvgg\") on node \"addons-20210816214127-6487\" DevicePath \"\""
	Aug 16 21:49:08 addons-20210816214127-6487 kubelet[1566]: W0816 21:49:08.766141    1566 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 16 21:49:08 addons-20210816214127-6487 kubelet[1566]: W0816 21:49:08.775075    1566 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 16 21:49:08 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:08.821340    1566 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:49:09 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:09.258208    1566 scope.go:111] "RemoveContainer" containerID="0a6e4183ecad7e3743b06ff27c35ab6520a74e41af5c5d5cfcce6a863baead0b"
	Aug 16 21:49:09 addons-20210816214127-6487 kubelet[1566]: I0816 21:49:09.302820    1566 scope.go:111] "RemoveContainer" containerID="9206745b40afbd74cc79fd3407e2d509a0c8ba8b21a8e98588168c1bae07255c"
	Aug 16 21:49:18 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:18.931448    1566 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250/docker/faf0978ccbf3899282c5e6b119a043e3e3a2286a841759f21fd9355ac5645250\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:49:18 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:18.952101    1566 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-create-c9pwv_deebb947-63ac-46ac-a67e-e6cafe37f501: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-create-c9pwv"
	Aug 16 21:49:18 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:18.953398    1566 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/deebb947-63ac-46ac-a67e-e6cafe37f501/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-create-c9pwv"
	Aug 16 21:49:18 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:18.972794    1566 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-c6zdq_e587eb73-fd06-4a03-9476-988cecc647aa: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-patch-c6zdq"
	Aug 16 21:49:18 addons-20210816214127-6487 kubelet[1566]: E0816 21:49:18.974264    1566 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/e587eb73-fd06-4a03-9476-988cecc647aa/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-patch-c6zdq"
	
	* 
	* ==> storage-provisioner [e76238027a727f5a19e2773ce2469454cfdedae5e7802f33839e4c842a400759] <==
	* I0816 21:42:20.623996       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 21:42:20.716149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 21:42:20.716224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 21:42:20.730961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 21:42:20.731255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d22a4dd7-a589-4595-a7f5-52f84c4af99f", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210816214127-6487_d8aa5f1d-d79e-417b-9786-5ca5048fff3d became leader
	I0816 21:42:20.731351       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210816214127-6487_d8aa5f1d-d79e-417b-9786-5ca5048fff3d!
	I0816 21:42:20.832397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210816214127-6487_d8aa5f1d-d79e-417b-9786-5ca5048fff3d!
	

                                                
                                                
-- /stdout --
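Note on the etcd-operator logs above: the repeated "selfLink was empty, can't make reference" errors are expected on this cluster (Kubernetes v1.21.3). selfLink is no longer populated because the RemoveSelfLink feature gate has defaulted to true since v1.20, while etcd-operator 0.9.4 vendors an old client-go that still builds event references from it; the errors are cosmetic and only suppress the operator's events. If those events were needed, one illustrative workaround (valid only through Kubernetes v1.23, sketched here rather than taken from this run) is to start the profile with the gate disabled:

	out/minikube-linux-amd64 start -p addons-20210816214127-6487 --extra-config=apiserver.feature-gates=RemoveSelfLink=false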
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210816214127-6487 -n addons-20210816214127-6487
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210816214127-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210816214127-6487 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210816214127-6487 describe pod : exit status 1 (46.792722ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210816214127-6487 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (303.43s)
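Note: the empty "kubectl describe pod" failure above ("resource name may not be empty") is an artifact of the post-mortem itself, not of the cluster: the field selector matched no non-running pods, so describe was invoked with an empty name list. A minimal guard, sketched in shell rather than taken from helpers_test.go:

	# Only describe when the selector actually matched something; `describe pod`
	# would still need -n per namespace, this sketch only covers the empty-argument case.
	pods="$(kubectl --context addons-20210816214127-6487 get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}')"
	[ -n "$pods" ] && kubectl --context addons-20210816214127-6487 describe pod $pods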

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- sh -c "ping -c 1 192.168.49.1": exit status 1 (184.26053ms)

                                                
                                                
-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-lw52x): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- sh -c "ping -c 1 192.168.49.1": exit status 1 (189.27306ms)

                                                
                                                
-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-v4kzv): exit status 1
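Note: both pods fail the same way, which points at pod capabilities rather than networking. "ping: permission denied (are you root?)" is what busybox prints when it cannot open an ICMP socket, typically because the container lacks CAP_NET_RAW and its group falls outside the kernel's net.ipv4.ping_group_range sysctl, which gates unprivileged ICMP sockets. A sketch of how one might confirm this from inside a pod (illustrative commands, not part of this run):

	# effective capability mask of the exec'd shell; decode with capsh --decode=<mask>
	out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- sh -c "grep CapEff /proc/self/status"
	# the kernel default "1 0" disables unprivileged ICMP for every group
	out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- sh -c "cat /proc/sys/net/ipv4/ping_group_range"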
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210816215712-6487
helpers_test.go:236: (dbg) docker inspect multinode-20210816215712-6487:

-- stdout --
	[
	    {
	        "Id": "1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3",
	        "Created": "2021-08-16T21:57:14.15116066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 72177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T21:57:14.604288559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/hosts",
	        "LogPath": "/var/lib/docker/containers/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3-json.log",
	        "Name": "/multinode-20210816215712-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210816215712-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210816215712-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8149d85f63f62d52dd60db3b171b8c6ea2c3b9716491fe0cbb3b064b5fb31f01-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8149d85f63f62d52dd60db3b171b8c6ea2c3b9716491fe0cbb3b064b5fb31f01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8149d85f63f62d52dd60db3b171b8c6ea2c3b9716491fe0cbb3b064b5fb31f01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8149d85f63f62d52dd60db3b171b8c6ea2c3b9716491fe0cbb3b064b5fb31f01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210816215712-6487",
	                "Source": "/var/lib/docker/volumes/multinode-20210816215712-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210816215712-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210816215712-6487",
	                "name.minikube.sigs.k8s.io": "multinode-20210816215712-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74a20767b4d75dd6c9f5d3d4a361276277f6a076f4650dd30d483a57b50126bc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/74a20767b4d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210816215712-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1f5cc52eda7f"
	                    ],
	                    "NetworkID": "4350192b6afa1eb0902b649e61cb27d7117569a74f46a5032423b4d232797889",
	                    "EndpointID": "2e17712ce111b6af24f87e5fa086d37dc831cdb2431e926f4cb63cd26b02364a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
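Note: nothing in the inspect output above is itself wrong — the ping target 192.168.49.1 is the gateway of the cluster's bridge network and the node sits at 192.168.49.2, so the route to the host was configured as intended. Both values can be read back directly with Go templates (illustrative invocations, not part of this run):

	docker network inspect multinode-20210816215712-6487 --format '{{(index .IPAM.Config 0).Gateway}}'
	# 192.168.49.1
	docker container inspect multinode-20210816215712-6487 --format '{{(index .NetworkSettings.Networks "multinode-20210816215712-6487").IPAddress}}'
	# 192.168.49.2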
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210816215712-6487 -n multinode-20210816215712-6487
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 logs -n 25
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | json-output-20210816215401-6487       | testUser | v1.22.0 | Mon, 16 Aug 2021 21:54:01 UTC | Mon, 16 Aug 2021 21:55:31 UTC |
	|         | json-output-20210816215401-6487                   |                                       |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                       |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                       |          |         |                               |                               |
	|         | --driver=docker                                   |                                       |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                       |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210816215401-6487       | testUser | v1.22.0 | Mon, 16 Aug 2021 21:55:33 UTC | Mon, 16 Aug 2021 21:55:33 UTC |
	|         | json-output-20210816215401-6487                   |                                       |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                       |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210816215401-6487       | testUser | v1.22.0 | Mon, 16 Aug 2021 21:55:33 UTC | Mon, 16 Aug 2021 21:55:44 UTC |
	|         | json-output-20210816215401-6487                   |                                       |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                       |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210816215401-6487       | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:55:45 UTC | Mon, 16 Aug 2021 21:55:50 UTC |
	|         | json-output-20210816215401-6487                   |                                       |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210816215550-6487 | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:55:50 UTC | Mon, 16 Aug 2021 21:55:50 UTC |
	|         | json-output-error-20210816215550-6487             |                                       |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210816215550-6487    | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:55:50 UTC | Mon, 16 Aug 2021 21:56:17 UTC |
	|         | docker-network-20210816215550-6487                |                                       |          |         |                               |                               |
	|         | --network=                                        |                                       |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210816215550-6487    | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:56:17 UTC | Mon, 16 Aug 2021 21:56:20 UTC |
	|         | docker-network-20210816215550-6487                |                                       |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210816215620-6487    | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:56:20 UTC | Mon, 16 Aug 2021 21:56:44 UTC |
	|         | docker-network-20210816215620-6487                |                                       |          |         |                               |                               |
	|         | --network=bridge                                  |                                       |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210816215620-6487    | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:56:44 UTC | Mon, 16 Aug 2021 21:56:47 UTC |
	|         | docker-network-20210816215620-6487                |                                       |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210816215647-6487  | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:56:47 UTC | Mon, 16 Aug 2021 21:57:09 UTC |
	|         | existing-network-20210816215647-6487              |                                       |          |         |                               |                               |
	|         | --network=existing-network                        |                                       |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210816215647-6487  | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:57:09 UTC | Mon, 16 Aug 2021 21:57:12 UTC |
	|         | existing-network-20210816215647-6487              |                                       |          |         |                               |                               |
	| start   | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:57:12 UTC | Mon, 16 Aug 2021 21:58:48 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                       |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                       |          |         |                               |                               |
	|         | --alsologtostderr                                 |                                       |          |         |                               |                               |
	|         | --driver=docker                                   |                                       |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487 -- apply -f      | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:58:48 UTC | Mon, 16 Aug 2021 21:58:48 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:58:48 UTC | Mon, 16 Aug 2021 21:59:12 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- rollout status                                 |                                       |          |         |                               |                               |
	|         | deployment/busybox                                |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487                  | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:12 UTC | Mon, 16 Aug 2021 21:59:12 UTC |
	|         | -- get pods -o                                    |                                       |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487                  | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:12 UTC | Mon, 16 Aug 2021 21:59:12 UTC |
	|         | -- get pods -o                                    |                                       |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:12 UTC | Mon, 16 Aug 2021 21:59:13 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-lw52x --                       |                                       |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:13 UTC | Mon, 16 Aug 2021 21:59:13 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-v4kzv --                       |                                       |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:13 UTC | Mon, 16 Aug 2021 21:59:13 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-lw52x --                       |                                       |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:13 UTC | Mon, 16 Aug 2021 21:59:13 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-v4kzv --                       |                                       |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487                  | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:13 UTC | Mon, 16 Aug 2021 21:59:13 UTC |
	|         | -- exec busybox-84b6686758-lw52x                  |                                       |          |         |                               |                               |
	|         | -- nslookup                                       |                                       |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487                  | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:13 UTC | Mon, 16 Aug 2021 21:59:14 UTC |
	|         | -- exec busybox-84b6686758-v4kzv                  |                                       |          |         |                               |                               |
	|         | -- nslookup                                       |                                       |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                       |          |         |                               |                               |
	| kubectl | -p multinode-20210816215712-6487                  | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:14 UTC | Mon, 16 Aug 2021 21:59:14 UTC |
	|         | -- get pods -o                                    |                                       |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:14 UTC | Mon, 16 Aug 2021 21:59:14 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-lw52x                          |                                       |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                       |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                       |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                       |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210816215712-6487         | jenkins  | v1.22.0 | Mon, 16 Aug 2021 21:59:14 UTC | Mon, 16 Aug 2021 21:59:14 UTC |
	|         | multinode-20210816215712-6487                     |                                       |          |         |                               |                               |
	|         | -- exec                                           |                                       |          |         |                               |                               |
	|         | busybox-84b6686758-v4kzv                          |                                       |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                       |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                       |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                       |          |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:57:12
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:57:12.413542   71539 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:57:12.413638   71539 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:57:12.413653   71539 out.go:311] Setting ErrFile to fd 2...
	I0816 21:57:12.413658   71539 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:57:12.413789   71539 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:57:12.414091   71539 out.go:305] Setting JSON to false
	I0816 21:57:12.448219   71539 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":2199,"bootTime":1629148833,"procs":172,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:57:12.448310   71539 start.go:121] virtualization: kvm guest
	I0816 21:57:12.450855   71539 out.go:177] * [multinode-20210816215712-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 21:57:12.452531   71539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:57:12.451025   71539 notify.go:169] Checking for updates...
	I0816 21:57:12.454103   71539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:57:12.455443   71539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:57:12.456879   71539 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:57:12.457063   71539 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:57:12.501656   71539 docker.go:132] docker version: linux-19.03.15
	I0816 21:57:12.501748   71539 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:57:12.578681   71539 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:57:12.536277998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:57:12.578810   71539 docker.go:244] overlay module found
	I0816 21:57:12.580661   71539 out.go:177] * Using the docker driver based on user configuration
	I0816 21:57:12.580684   71539 start.go:278] selected driver: docker
	I0816 21:57:12.580691   71539 start.go:751] validating driver "docker" against <nil>
	I0816 21:57:12.580713   71539 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 21:57:12.580772   71539 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 21:57:12.580792   71539 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 21:57:12.582099   71539 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 21:57:12.582886   71539 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:57:12.664677   71539 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:57:12.615749592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:57:12.664802   71539 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 21:57:12.664948   71539 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 21:57:12.664969   71539 cni.go:93] Creating CNI manager for ""
	I0816 21:57:12.664979   71539 cni.go:154] 0 nodes found, recommending kindnet
	I0816 21:57:12.664990   71539 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 21:57:12.664999   71539 start_flags.go:277] config:
	{Name:multinode-20210816215712-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0816 21:57:12.666963   71539 out.go:177] * Starting control plane node multinode-20210816215712-6487 in cluster multinode-20210816215712-6487
	I0816 21:57:12.667010   71539 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:57:12.668366   71539 out.go:177] * Pulling base image ...
	I0816 21:57:12.668389   71539 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:57:12.668416   71539 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 21:57:12.668436   71539 cache.go:56] Caching tarball of preloaded images
	I0816 21:57:12.668493   71539 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:57:12.668619   71539 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 21:57:12.668635   71539 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 21:57:12.668957   71539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json ...
	I0816 21:57:12.668990   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json: {Name:mkb1441188cbcd5cb552898adfafba4d1bf71608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:12.751668   71539 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:57:12.751705   71539 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 21:57:12.751720   71539 cache.go:205] Successfully downloaded all kic artifacts
	I0816 21:57:12.751759   71539 start.go:313] acquiring machines lock for multinode-20210816215712-6487: {Name:mk03f42aeb71bc32bb2f31c2823823af74c38491 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:57:12.751882   71539 start.go:317] acquired machines lock for "multinode-20210816215712-6487" in 104.844µs
	I0816 21:57:12.751922   71539 start.go:89] Provisioning new machine with config: &{Name:multinode-20210816215712-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 21:57:12.752014   71539 start.go:126] createHost starting for "" (driver="docker")
	I0816 21:57:12.754181   71539 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 21:57:12.754393   71539 start.go:160] libmachine.API.Create for "multinode-20210816215712-6487" (driver="docker")
	I0816 21:57:12.754419   71539 client.go:168] LocalClient.Create starting
	I0816 21:57:12.754479   71539 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 21:57:12.754515   71539 main.go:130] libmachine: Decoding PEM data...
	I0816 21:57:12.754538   71539 main.go:130] libmachine: Parsing certificate...
	I0816 21:57:12.754682   71539 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 21:57:12.754711   71539 main.go:130] libmachine: Decoding PEM data...
	I0816 21:57:12.754734   71539 main.go:130] libmachine: Parsing certificate...
	I0816 21:57:12.755103   71539 cli_runner.go:115] Run: docker network inspect multinode-20210816215712-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 21:57:12.790386   71539 cli_runner.go:162] docker network inspect multinode-20210816215712-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 21:57:12.790483   71539 network_create.go:255] running [docker network inspect multinode-20210816215712-6487] to gather additional debugging logs...
	I0816 21:57:12.790502   71539 cli_runner.go:115] Run: docker network inspect multinode-20210816215712-6487
	W0816 21:57:12.825474   71539 cli_runner.go:162] docker network inspect multinode-20210816215712-6487 returned with exit code 1
	I0816 21:57:12.825502   71539 network_create.go:258] error running [docker network inspect multinode-20210816215712-6487]: docker network inspect multinode-20210816215712-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210816215712-6487
	I0816 21:57:12.825514   71539 network_create.go:260] output of [docker network inspect multinode-20210816215712-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210816215712-6487
	
	** /stderr **
	I0816 21:57:12.825563   71539 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:57:12.860857   71539 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000140ec0] misses:0}
	I0816 21:57:12.860893   71539 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 21:57:12.860908   71539 network_create.go:106] attempt to create docker network multinode-20210816215712-6487 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 21:57:12.860947   71539 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210816215712-6487
	I0816 21:57:12.925282   71539 network_create.go:90] docker network multinode-20210816215712-6487 192.168.49.0/24 created
	I0816 21:57:12.925305   71539 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210816215712-6487" container
	I0816 21:57:12.925352   71539 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 21:57:12.960383   71539 cli_runner.go:115] Run: docker volume create multinode-20210816215712-6487 --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 21:57:12.996529   71539 oci.go:102] Successfully created a docker volume multinode-20210816215712-6487
	I0816 21:57:12.996606   71539 cli_runner.go:115] Run: docker run --rm --name multinode-20210816215712-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487 --entrypoint /usr/bin/test -v multinode-20210816215712-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 21:57:14.030300   71539 cli_runner.go:168] Completed: docker run --rm --name multinode-20210816215712-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487 --entrypoint /usr/bin/test -v multinode-20210816215712-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (1.033629973s)
	I0816 21:57:14.030338   71539 oci.go:106] Successfully prepared a docker volume multinode-20210816215712-6487
	W0816 21:57:14.030379   71539 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 21:57:14.030391   71539 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:57:14.030423   71539 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 21:57:14.030480   71539 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210816215712-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	W0816 21:57:14.030391   71539 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 21:57:14.030540   71539 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 21:57:14.112012   71539 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210816215712-6487 --name multinode-20210816215712-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210816215712-6487 --network multinode-20210816215712-6487 --ip 192.168.49.2 --volume multinode-20210816215712-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 21:57:14.613684   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Running}}
	I0816 21:57:14.656387   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:14.702774   71539 cli_runner.go:115] Run: docker exec multinode-20210816215712-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 21:57:14.837894   71539 oci.go:278] the created container "multinode-20210816215712-6487" has a running status.
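Each --publish=127.0.0.1::PORT flag in the docker run above binds a random loopback host port, which the provisioning steps below recover via "docker container inspect" on the port map. The same lookup can be done by hand:

	docker port multinode-20210816215712-6487 22/tcp   # prints e.g. 127.0.0.1:32807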
	I0816 21:57:14.837935   71539 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa...
	I0816 21:57:15.121347   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0816 21:57:15.121405   71539 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 21:57:15.510263   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:15.549714   71539 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 21:57:15.549736   71539 kic_runner.go:115] Args: [docker exec --privileged multinode-20210816215712-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
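The three steps above are the SSH bootstrap: generate an RSA keypair on the host, stream the public half into the node container as /home/docker/.ssh/authorized_keys, then chown it so the unprivileged docker user can authenticate. A rough by-hand equivalent (container name is a placeholder):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker cp ./id_rsa.pub demo-node:/home/docker/.ssh/authorized_keys
	docker exec --privileged demo-node chown docker:docker /home/docker/.ssh/authorized_keys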
	I0816 21:57:17.391956   71539 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210816215712-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.36140354s)
	I0816 21:57:17.391992   71539 kic.go:188] duration metric: took 3.361567 seconds to extract preloaded images to volume
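The preload tarball is unpacked by a second disposable container directly into the node volume, so CRI-O finds its image store already populated at first boot. One way to sanity-check what landed in the volume (same placeholder names as above):

	docker run --rm --entrypoint /bin/ls -v demo-node:/var kicbase:demo /var/lib/containers/storage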
	I0816 21:57:17.392058   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:17.428940   71539 machine.go:88] provisioning docker machine ...
	I0816 21:57:17.428993   71539 ubuntu.go:169] provisioning hostname "multinode-20210816215712-6487"
	I0816 21:57:17.429041   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:17.465043   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:57:17.465236   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0816 21:57:17.465258   71539 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210816215712-6487 && echo "multinode-20210816215712-6487" | sudo tee /etc/hostname
	I0816 21:57:17.603325   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210816215712-6487
	
	I0816 21:57:17.603410   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:17.642582   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:57:17.642719   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0816 21:57:17.642744   71539 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210816215712-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210816215712-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210816215712-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 21:57:17.763291   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: 
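The script above is idempotent: the outer "grep -xq" guard makes it a no-op when the hostname is already mapped, and the inner branch either rewrites an existing 127.0.1.1 entry in place or appends a new one. Either way, exactly one mapping should remain afterwards:

	grep '^127.0.1.1' /etc/hosts   # -> 127.0.1.1 multinode-20210816215712-6487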
	I0816 21:57:17.763320   71539 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 21:57:17.763339   71539 ubuntu.go:177] setting up certificates
	I0816 21:57:17.763349   71539 provision.go:83] configureAuth start
	I0816 21:57:17.763391   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487
	I0816 21:57:17.800053   71539 provision.go:138] copyHostCerts
	I0816 21:57:17.800087   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 21:57:17.800112   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 21:57:17.800118   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 21:57:17.800164   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 21:57:17.800231   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 21:57:17.800251   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 21:57:17.800255   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 21:57:17.800271   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 21:57:17.800316   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 21:57:17.800332   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 21:57:17.800338   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 21:57:17.800353   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 21:57:17.800396   71539 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.multinode-20210816215712-6487 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210816215712-6487]
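configureAuth mints a fresh server certificate signed by the local minikube CA, listing every address a client might dial (container IP, loopback, hostnames) as a SAN. minikube does this in-process in Go; an openssl sketch of the same shape, with illustrative file names and validity period:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-20210816215712-6487" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-20210816215712-6487')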
	I0816 21:57:17.848232   71539 provision.go:172] copyRemoteCerts
	I0816 21:57:17.848283   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 21:57:17.848313   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:17.885022   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:17.974576   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 21:57:17.974630   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 21:57:17.989908   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 21:57:17.989947   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0816 21:57:18.004612   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 21:57:18.004649   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 21:57:18.019749   71539 provision.go:86] duration metric: configureAuth took 256.388474ms
	I0816 21:57:18.019771   71539 ubuntu.go:193] setting minikube options for container-runtime
	I0816 21:57:18.019897   71539 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:57:18.020042   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:18.056545   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:57:18.056684   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0816 21:57:18.056702   71539 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 21:57:18.406241   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 21:57:18.406270   71539 machine.go:91] provisioned docker machine in 977.302493ms
	I0816 21:57:18.406280   71539 client.go:171] LocalClient.Create took 5.651855676s
	I0816 21:57:18.406296   71539 start.go:168] duration metric: libmachine.API.Create for "multinode-20210816215712-6487" took 5.651903349s
	I0816 21:57:18.406306   71539 start.go:267] post-start starting for "multinode-20210816215712-6487" (driver="docker")
	I0816 21:57:18.406312   71539 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 21:57:18.406376   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 21:57:18.406426   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:18.442709   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:18.530646   71539 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 21:57:18.533085   71539 command_runner.go:124] > NAME="Ubuntu"
	I0816 21:57:18.533107   71539 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0816 21:57:18.533114   71539 command_runner.go:124] > ID=ubuntu
	I0816 21:57:18.533121   71539 command_runner.go:124] > ID_LIKE=debian
	I0816 21:57:18.533129   71539 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0816 21:57:18.533135   71539 command_runner.go:124] > VERSION_ID="20.04"
	I0816 21:57:18.533142   71539 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0816 21:57:18.533148   71539 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0816 21:57:18.533153   71539 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0816 21:57:18.533164   71539 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0816 21:57:18.533170   71539 command_runner.go:124] > VERSION_CODENAME=focal
	I0816 21:57:18.533174   71539 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0816 21:57:18.533242   71539 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 21:57:18.533259   71539 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 21:57:18.533268   71539 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 21:57:18.533273   71539 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 21:57:18.533287   71539 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 21:57:18.533343   71539 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 21:57:18.533470   71539 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 21:57:18.533484   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> /etc/ssl/certs/64872.pem
	I0816 21:57:18.533601   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 21:57:18.539403   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 21:57:18.554873   71539 start.go:270] post-start completed in 148.55578ms
	I0816 21:57:18.555163   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487
	I0816 21:57:18.592535   71539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json ...
	I0816 21:57:18.592769   71539 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:57:18.592817   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:18.628857   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:18.715753   71539 command_runner.go:124] > 29%
	I0816 21:57:18.715786   71539 start.go:129] duration metric: createHost completed in 5.963763273s
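The df one-liner is how minikube samples disk pressure on the node: row 2 of "df -h /var" is the data row and field 5 is the Use% column, hence the 29% reply above.

	df -h /var | awk 'NR==2{print $5}'   # NR==2 skips the header row; $5 is Use%, e.g. "29%"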
	I0816 21:57:18.715797   71539 start.go:80] releasing machines lock for "multinode-20210816215712-6487", held for 5.963903119s
	I0816 21:57:18.715867   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487
	I0816 21:57:18.752399   71539 ssh_runner.go:149] Run: systemctl --version
	I0816 21:57:18.752448   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:18.752476   71539 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 21:57:18.752540   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:18.790841   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:18.791274   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:18.905997   71539 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0816 21:57:18.906027   71539 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0816 21:57:18.906036   71539 command_runner.go:124] > <H1>302 Moved</H1>
	I0816 21:57:18.906044   71539 command_runner.go:124] > The document has moved
	I0816 21:57:18.906053   71539 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0816 21:57:18.906058   71539 command_runner.go:124] > </BODY></HTML>
	I0816 21:57:18.906085   71539 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.11)
	I0816 21:57:18.906108   71539 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0816 21:57:18.906174   71539 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 21:57:18.922381   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 21:57:18.930121   71539 docker.go:153] disabling docker service ...
	I0816 21:57:18.930172   71539 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 21:57:18.940327   71539 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 21:57:18.948065   71539 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 21:57:18.956003   71539 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0816 21:57:19.009446   71539 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 21:57:19.071684   71539 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0816 21:57:19.071753   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
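Stopping docker.socket before docker.service matters because socket activation would otherwise revive the daemon on the next connection; masking then symlinks the unit to /dev/null so nothing can start it, as the "Created symlink" line confirms. To verify the end state:

	systemctl is-enabled docker.service   # -> masked (exit status is non-zero)
	systemctl is-active docker.service    # -> inactive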
	I0816 21:57:19.079967   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 21:57:19.090435   71539 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 21:57:19.090459   71539 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
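With /etc/crictl.yaml written as above, bare crictl invocations find CRI-O's socket implicitly; the equivalent explicit form would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock --image-endpoint unix:///var/run/crio/crio.sock version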
	I0816 21:57:19.091030   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 21:57:19.098031   71539 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 21:57:19.098051   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
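After the two sed edits, /etc/crio/crio.conf should carry the pause image and default CNI network minikube expects, which can be confirmed with:

	grep -E '^(pause_image|cni_default_network) = ' /etc/crio/crio.conf
	# pause_image = "k8s.gcr.io/pause:3.4.1"
	# cni_default_network = "kindnet"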
	I0816 21:57:19.105009   71539 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 21:57:19.110000   71539 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 21:57:19.110434   71539 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 21:57:19.110468   71539 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 21:57:19.116563   71539 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
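The sysctl probe exits 255 because /proc/sys/net/bridge/ only appears once the br_netfilter kernel module is loaded, which is why the failure is logged as "might be okay" and followed immediately by a modprobe. The recovery sequence, in order:

	sudo modprobe br_netfilter                           # creates /proc/sys/net/bridge/*
	sysctl net.bridge.bridge-nf-call-iptables            # now resolves (typically 1)
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # required for pod-to-pod routing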
	I0816 21:57:19.122001   71539 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 21:57:19.178062   71539 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 21:57:19.186332   71539 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 21:57:19.186390   71539 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 21:57:19.189105   71539 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0816 21:57:19.189129   71539 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 21:57:19.189140   71539 command_runner.go:124] > Device: 35h/53d	Inode: 361214      Links: 1
	I0816 21:57:19.189151   71539 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 21:57:19.189163   71539 command_runner.go:124] > Access: 2021-08-16 21:57:18.391936306 +0000
	I0816 21:57:19.189179   71539 command_runner.go:124] > Modify: 2021-08-16 21:57:18.391936306 +0000
	I0816 21:57:19.189185   71539 command_runner.go:124] > Change: 2021-08-16 21:57:18.391936306 +0000
	I0816 21:57:19.189193   71539 command_runner.go:124] >  Birth: -
	I0816 21:57:19.189208   71539 start.go:413] Will wait 60s for crictl version
	I0816 21:57:19.189257   71539 ssh_runner.go:149] Run: sudo crictl version
	I0816 21:57:19.215213   71539 command_runner.go:124] > Version:  0.1.0
	I0816 21:57:19.215237   71539 command_runner.go:124] > RuntimeName:  cri-o
	I0816 21:57:19.215241   71539 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0816 21:57:19.215250   71539 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0816 21:57:19.216696   71539 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 21:57:19.216768   71539 ssh_runner.go:149] Run: crio --version
	I0816 21:57:19.271152   71539 command_runner.go:124] > crio version 1.20.3
	I0816 21:57:19.271176   71539 command_runner.go:124] > Version:       1.20.3
	I0816 21:57:19.271186   71539 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0816 21:57:19.271193   71539 command_runner.go:124] > GitTreeState:  clean
	I0816 21:57:19.271205   71539 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0816 21:57:19.271211   71539 command_runner.go:124] > GoVersion:     go1.15.2
	I0816 21:57:19.271216   71539 command_runner.go:124] > Compiler:      gc
	I0816 21:57:19.271221   71539 command_runner.go:124] > Platform:      linux/amd64
	I0816 21:57:19.271226   71539 command_runner.go:124] > Linkmode:      dynamic
	I0816 21:57:19.272367   71539 command_runner.go:124] ! time="2021-08-16T21:57:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:57:19.272454   71539 ssh_runner.go:149] Run: crio --version
	I0816 21:57:19.327361   71539 command_runner.go:124] > crio version 1.20.3
	I0816 21:57:19.327383   71539 command_runner.go:124] > Version:       1.20.3
	I0816 21:57:19.327402   71539 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0816 21:57:19.327408   71539 command_runner.go:124] > GitTreeState:  clean
	I0816 21:57:19.327417   71539 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0816 21:57:19.327428   71539 command_runner.go:124] > GoVersion:     go1.15.2
	I0816 21:57:19.327434   71539 command_runner.go:124] > Compiler:      gc
	I0816 21:57:19.327441   71539 command_runner.go:124] > Platform:      linux/amd64
	I0816 21:57:19.327450   71539 command_runner.go:124] > Linkmode:      dynamic
	I0816 21:57:19.328588   71539 command_runner.go:124] ! time="2021-08-16T21:57:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:57:19.332617   71539 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 21:57:19.332700   71539 cli_runner.go:115] Run: docker network inspect multinode-20210816215712-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:57:19.369090   71539 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 21:57:19.372356   71539 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
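The host.minikube.internal rewrite filters any stale mapping out of /etc/hosts, re-appends the current gateway IP, stages the result under a PID-unique temp name, and copies it back under sudo (a plain "> /etc/hosts" redirect would be performed by the unprivileged shell and fail). Annotated:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any previous mapping
	  echo $'192.168.49.1\thost.minikube.internal'       # re-add with the current gateway IP
	} > /tmp/h.$$                                        # $$ = shell PID, a unique temp file
	sudo cp /tmp/h.$$ /etc/hosts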
	I0816 21:57:19.381333   71539 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:57:19.381384   71539 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 21:57:19.423459   71539 command_runner.go:124] > {
	I0816 21:57:19.423482   71539 command_runner.go:124] >   "images": [
	I0816 21:57:19.423488   71539 command_runner.go:124] >     {
	I0816 21:57:19.423500   71539 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0816 21:57:19.423507   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.423518   71539 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0816 21:57:19.423527   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423535   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.423552   71539 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0816 21:57:19.423568   71539 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0816 21:57:19.423577   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423587   71539 command_runner.go:124] >       "size": "119984626",
	I0816 21:57:19.423596   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.423612   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.423625   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.423634   71539 command_runner.go:124] >     },
	I0816 21:57:19.423640   71539 command_runner.go:124] >     {
	I0816 21:57:19.423653   71539 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0816 21:57:19.423663   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.423676   71539 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0816 21:57:19.423684   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423694   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.423712   71539 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0816 21:57:19.423728   71539 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0816 21:57:19.423736   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423744   71539 command_runner.go:124] >       "size": "228528983",
	I0816 21:57:19.423752   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.423759   71539 command_runner.go:124] >       "username": "nonroot",
	I0816 21:57:19.423771   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.423779   71539 command_runner.go:124] >     },
	I0816 21:57:19.423785   71539 command_runner.go:124] >     {
	I0816 21:57:19.423799   71539 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0816 21:57:19.423808   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.423820   71539 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0816 21:57:19.423828   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423836   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.423852   71539 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0816 21:57:19.423869   71539 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0816 21:57:19.423879   71539 command_runner.go:124] >       ],
	I0816 21:57:19.423890   71539 command_runner.go:124] >       "size": "36950651",
	I0816 21:57:19.423899   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.423931   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.423939   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.423947   71539 command_runner.go:124] >     },
	I0816 21:57:19.423953   71539 command_runner.go:124] >     {
	I0816 21:57:19.423967   71539 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 21:57:19.423976   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.423988   71539 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 21:57:19.423997   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424006   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424021   71539 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 21:57:19.424037   71539 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 21:57:19.424046   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424053   71539 command_runner.go:124] >       "size": "31470524",
	I0816 21:57:19.424066   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.424076   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424086   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424094   71539 command_runner.go:124] >     },
	I0816 21:57:19.424099   71539 command_runner.go:124] >     {
	I0816 21:57:19.424110   71539 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0816 21:57:19.424120   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424131   71539 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0816 21:57:19.424139   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424147   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424163   71539 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0816 21:57:19.424178   71539 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0816 21:57:19.424186   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424193   71539 command_runner.go:124] >       "size": "42585056",
	I0816 21:57:19.424202   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.424211   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424218   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424226   71539 command_runner.go:124] >     },
	I0816 21:57:19.424232   71539 command_runner.go:124] >     {
	I0816 21:57:19.424245   71539 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0816 21:57:19.424254   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424264   71539 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0816 21:57:19.424272   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424280   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424295   71539 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0816 21:57:19.424311   71539 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0816 21:57:19.424320   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424329   71539 command_runner.go:124] >       "size": "254662613",
	I0816 21:57:19.424336   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.424346   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424355   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424363   71539 command_runner.go:124] >     },
	I0816 21:57:19.424369   71539 command_runner.go:124] >     {
	I0816 21:57:19.424382   71539 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0816 21:57:19.424392   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424402   71539 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0816 21:57:19.424408   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424415   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424427   71539 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0816 21:57:19.424442   71539 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0816 21:57:19.424450   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424457   71539 command_runner.go:124] >       "size": "126878961",
	I0816 21:57:19.424466   71539 command_runner.go:124] >       "uid": {
	I0816 21:57:19.424475   71539 command_runner.go:124] >         "value": "0"
	I0816 21:57:19.424484   71539 command_runner.go:124] >       },
	I0816 21:57:19.424493   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424503   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424509   71539 command_runner.go:124] >     },
	I0816 21:57:19.424519   71539 command_runner.go:124] >     {
	I0816 21:57:19.424534   71539 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0816 21:57:19.424544   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424555   71539 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0816 21:57:19.424563   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424572   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424588   71539 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0816 21:57:19.424608   71539 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0816 21:57:19.424618   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424629   71539 command_runner.go:124] >       "size": "121087578",
	I0816 21:57:19.424638   71539 command_runner.go:124] >       "uid": {
	I0816 21:57:19.424646   71539 command_runner.go:124] >         "value": "0"
	I0816 21:57:19.424654   71539 command_runner.go:124] >       },
	I0816 21:57:19.424670   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424679   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424687   71539 command_runner.go:124] >     },
	I0816 21:57:19.424693   71539 command_runner.go:124] >     {
	I0816 21:57:19.424706   71539 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0816 21:57:19.424717   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424729   71539 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0816 21:57:19.424737   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424744   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424758   71539 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0816 21:57:19.424774   71539 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0816 21:57:19.424783   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424791   71539 command_runner.go:124] >       "size": "105129702",
	I0816 21:57:19.424800   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.424809   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424815   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424823   71539 command_runner.go:124] >     },
	I0816 21:57:19.424829   71539 command_runner.go:124] >     {
	I0816 21:57:19.424841   71539 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0816 21:57:19.424848   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.424859   71539 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0816 21:57:19.424867   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424874   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.424890   71539 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0816 21:57:19.424905   71539 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0816 21:57:19.424913   71539 command_runner.go:124] >       ],
	I0816 21:57:19.424920   71539 command_runner.go:124] >       "size": "51893338",
	I0816 21:57:19.424929   71539 command_runner.go:124] >       "uid": {
	I0816 21:57:19.424940   71539 command_runner.go:124] >         "value": "0"
	I0816 21:57:19.424949   71539 command_runner.go:124] >       },
	I0816 21:57:19.424959   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.424967   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.424976   71539 command_runner.go:124] >     },
	I0816 21:57:19.424982   71539 command_runner.go:124] >     {
	I0816 21:57:19.424995   71539 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0816 21:57:19.425003   71539 command_runner.go:124] >       "repoTags": [
	I0816 21:57:19.425011   71539 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0816 21:57:19.425019   71539 command_runner.go:124] >       ],
	I0816 21:57:19.425026   71539 command_runner.go:124] >       "repoDigests": [
	I0816 21:57:19.425040   71539 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0816 21:57:19.425055   71539 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0816 21:57:19.425063   71539 command_runner.go:124] >       ],
	I0816 21:57:19.425068   71539 command_runner.go:124] >       "size": "689817",
	I0816 21:57:19.425073   71539 command_runner.go:124] >       "uid": null,
	I0816 21:57:19.425081   71539 command_runner.go:124] >       "username": "",
	I0816 21:57:19.425087   71539 command_runner.go:124] >       "spec": null
	I0816 21:57:19.425093   71539 command_runner.go:124] >     }
	I0816 21:57:19.425097   71539 command_runner.go:124] >   ]
	I0816 21:57:19.425103   71539 command_runner.go:124] > }
	I0816 21:57:19.425274   71539 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 21:57:19.425287   71539 crio.go:333] Images already preloaded, skipping extraction
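The JSON inventory above is what crio.go checks against the expected preload before deciding to skip extraction. For a human-readable summary of the same data, a jq one-liner works (jq assumed available on the host):

	sudo crictl images --output json | jq -r '.images[] | .repoTags[0] + "\t" + .size'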
	I0816 21:57:19.425336   71539 ssh_runner.go:149] Run: sudo crictl images --output json
	[second "sudo crictl images --output json" listing omitted: identical image inventory to the output above, differing only in log timestamps]
	I0816 21:57:19.447403   71539 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 21:57:19.447415   71539 cache_images.go:74] Images are preloaded, skipping loading
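
The preload check above parses the image list that the CRI returned as JSON (the id/repoTags/repoDigests/size/uid fields dumped before it). A minimal sketch of the same decode step, assuming `crictl images --output json` as the producer and with struct names invented for illustration rather than taken from minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageInfo mirrors the fields visible in the dump above; uid may be null.
	type imageInfo struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // quoted number in the CRI response
		Username    string   `json:"username"`
	}

	type imageList struct {
		Images []imageInfo `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
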
	I0816 21:57:19.447480   71539 ssh_runner.go:149] Run: crio config
	I0816 21:57:19.507874   71539 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0816 21:57:19.507912   71539 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 21:57:19.507923   71539 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 21:57:19.507928   71539 command_runner.go:124] > #
	I0816 21:57:19.507939   71539 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 21:57:19.507953   71539 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 21:57:19.507960   71539 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 21:57:19.507970   71539 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 21:57:19.507974   71539 command_runner.go:124] > # reload'.
	I0816 21:57:19.507981   71539 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 21:57:19.507995   71539 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 21:57:19.508008   71539 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0816 21:57:19.508019   71539 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0816 21:57:19.508028   71539 command_runner.go:124] > [crio]
	I0816 21:57:19.508037   71539 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 21:57:19.508054   71539 command_runner.go:124] > # container images, in this directory.
	I0816 21:57:19.508064   71539 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0816 21:57:19.508086   71539 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 21:57:19.508096   71539 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0816 21:57:19.508107   71539 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0816 21:57:19.508117   71539 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 21:57:19.508125   71539 command_runner.go:124] > #storage_driver = "overlay"
	I0816 21:57:19.508134   71539 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0816 21:57:19.508143   71539 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0816 21:57:19.508148   71539 command_runner.go:124] > #storage_option = [
	I0816 21:57:19.508152   71539 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0816 21:57:19.508155   71539 command_runner.go:124] > #]
	I0816 21:57:19.508162   71539 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0816 21:57:19.508175   71539 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 21:57:19.508186   71539 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0816 21:57:19.508195   71539 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0816 21:57:19.508210   71539 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0816 21:57:19.508221   71539 command_runner.go:124] > # always happen on a node reboot
	I0816 21:57:19.508229   71539 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0816 21:57:19.508238   71539 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0816 21:57:19.508251   71539 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0816 21:57:19.508262   71539 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0816 21:57:19.508277   71539 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0816 21:57:19.508290   71539 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 21:57:19.508295   71539 command_runner.go:124] > [crio.api]
	I0816 21:57:19.508303   71539 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 21:57:19.508309   71539 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0816 21:57:19.508318   71539 command_runner.go:124] > # IP address on which the stream server will listen.
	I0816 21:57:19.508329   71539 command_runner.go:124] > stream_address = "127.0.0.1"
	I0816 21:57:19.508339   71539 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 21:57:19.508351   71539 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0816 21:57:19.508360   71539 command_runner.go:124] > stream_port = "0"
	I0816 21:57:19.508368   71539 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0816 21:57:19.508381   71539 command_runner.go:124] > stream_enable_tls = false
	I0816 21:57:19.508394   71539 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0816 21:57:19.508403   71539 command_runner.go:124] > stream_idle_timeout = ""
	I0816 21:57:19.508413   71539 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 21:57:19.508426   71539 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 21:57:19.508435   71539 command_runner.go:124] > # minutes.
	I0816 21:57:19.508472   71539 command_runner.go:124] > stream_tls_cert = ""
	I0816 21:57:19.508489   71539 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 21:57:19.508500   71539 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 21:57:19.508518   71539 command_runner.go:124] > stream_tls_key = ""
	I0816 21:57:19.508531   71539 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 21:57:19.508546   71539 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 21:57:19.508558   71539 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0816 21:57:19.508565   71539 command_runner.go:124] > stream_tls_ca = ""
	I0816 21:57:19.508584   71539 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0816 21:57:19.508595   71539 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0816 21:57:19.508606   71539 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0816 21:57:19.508617   71539 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0816 21:57:19.508630   71539 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 21:57:19.508645   71539 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0816 21:57:19.508654   71539 command_runner.go:124] > [crio.runtime]
	I0816 21:57:19.508664   71539 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0816 21:57:19.508676   71539 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 21:57:19.508686   71539 command_runner.go:124] > # "nofile=1024:2048"
	I0816 21:57:19.508699   71539 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 21:57:19.508709   71539 command_runner.go:124] > #default_ulimits = [
	I0816 21:57:19.508716   71539 command_runner.go:124] > #]
	I0816 21:57:19.508726   71539 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 21:57:19.508735   71539 command_runner.go:124] > no_pivot = false
	I0816 21:57:19.508746   71539 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0816 21:57:19.508771   71539 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0816 21:57:19.508782   71539 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0816 21:57:19.508792   71539 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 21:57:19.508803   71539 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0816 21:57:19.508811   71539 command_runner.go:124] > conmon = ""
	I0816 21:57:19.508821   71539 command_runner.go:124] > # Cgroup setting for conmon
	I0816 21:57:19.508831   71539 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0816 21:57:19.508844   71539 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0816 21:57:19.508853   71539 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0816 21:57:19.508863   71539 command_runner.go:124] > conmon_env = [
	I0816 21:57:19.508876   71539 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 21:57:19.508885   71539 command_runner.go:124] > ]
	I0816 21:57:19.508894   71539 command_runner.go:124] > # Additional environment variables to set for all the
	I0816 21:57:19.508905   71539 command_runner.go:124] > # containers. These are overridden if set in the
	I0816 21:57:19.508918   71539 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0816 21:57:19.508927   71539 command_runner.go:124] > default_env = [
	I0816 21:57:19.508935   71539 command_runner.go:124] > ]
	I0816 21:57:19.508945   71539 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0816 21:57:19.508956   71539 command_runner.go:124] > selinux = false
	I0816 21:57:19.508970   71539 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 21:57:19.508992   71539 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 21:57:19.509004   71539 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0816 21:57:19.509013   71539 command_runner.go:124] > seccomp_profile = ""
	I0816 21:57:19.509022   71539 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0816 21:57:19.509033   71539 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 21:57:19.509046   71539 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 21:57:19.509056   71539 command_runner.go:124] > # which might increase security.
	I0816 21:57:19.509067   71539 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0816 21:57:19.509084   71539 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 21:57:19.509097   71539 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 21:57:19.509109   71539 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 21:57:19.509121   71539 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0816 21:57:19.509132   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:57:19.509143   71539 command_runner.go:124] > apparmor_profile = "crio-default"
	I0816 21:57:19.509157   71539 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0816 21:57:19.509164   71539 command_runner.go:124] > # irqbalance daemon.
	I0816 21:57:19.509175   71539 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 21:57:19.509187   71539 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0816 21:57:19.509196   71539 command_runner.go:124] > cgroup_manager = "systemd"
	I0816 21:57:19.509206   71539 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 21:57:19.509213   71539 command_runner.go:124] > separate_pull_cgroup = ""
	I0816 21:57:19.509220   71539 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 21:57:19.509229   71539 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0816 21:57:19.509235   71539 command_runner.go:124] > # will be added.
	I0816 21:57:19.509239   71539 command_runner.go:124] > default_capabilities = [
	I0816 21:57:19.509245   71539 command_runner.go:124] > 	"CHOWN",
	I0816 21:57:19.509249   71539 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0816 21:57:19.509255   71539 command_runner.go:124] > 	"FSETID",
	I0816 21:57:19.509262   71539 command_runner.go:124] > 	"FOWNER",
	I0816 21:57:19.509268   71539 command_runner.go:124] > 	"SETGID",
	I0816 21:57:19.509271   71539 command_runner.go:124] > 	"SETUID",
	I0816 21:57:19.509277   71539 command_runner.go:124] > 	"SETPCAP",
	I0816 21:57:19.509281   71539 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0816 21:57:19.509287   71539 command_runner.go:124] > 	"KILL",
	I0816 21:57:19.509290   71539 command_runner.go:124] > ]
	I0816 21:57:19.509299   71539 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 21:57:19.509308   71539 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0816 21:57:19.509315   71539 command_runner.go:124] > default_sysctls = [
	I0816 21:57:19.509324   71539 command_runner.go:124] > ]
	I0816 21:57:19.509333   71539 command_runner.go:124] > # List of additional devices, specified as
	I0816 21:57:19.509340   71539 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 21:57:19.509348   71539 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0816 21:57:19.509359   71539 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0816 21:57:19.509366   71539 command_runner.go:124] > additional_devices = [
	I0816 21:57:19.509369   71539 command_runner.go:124] > ]
	I0816 21:57:19.509396   71539 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 21:57:19.509405   71539 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 21:57:19.509412   71539 command_runner.go:124] > hooks_dir = [
	I0816 21:57:19.509419   71539 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0816 21:57:19.509423   71539 command_runner.go:124] > ]
	I0816 21:57:19.509428   71539 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0816 21:57:19.509437   71539 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 21:57:19.509442   71539 command_runner.go:124] > # its default mounts from the following two files:
	I0816 21:57:19.509449   71539 command_runner.go:124] > #
	I0816 21:57:19.509455   71539 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 21:57:19.509465   71539 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0816 21:57:19.509473   71539 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0816 21:57:19.509479   71539 command_runner.go:124] > #
	I0816 21:57:19.509485   71539 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 21:57:19.509495   71539 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 21:57:19.509508   71539 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 21:57:19.509517   71539 command_runner.go:124] > #      only add mounts it finds in this file.
	I0816 21:57:19.509523   71539 command_runner.go:124] > #
	I0816 21:57:19.509528   71539 command_runner.go:124] > #default_mounts_file = ""
	I0816 21:57:19.509535   71539 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0816 21:57:19.509543   71539 command_runner.go:124] > pids_limit = 1024
	I0816 21:57:19.509549   71539 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0816 21:57:19.509560   71539 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 21:57:19.509569   71539 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 21:57:19.509575   71539 command_runner.go:124] > # limit is never exceeded.
	I0816 21:57:19.509579   71539 command_runner.go:124] > log_size_max = -1
	I0816 21:57:19.509602   71539 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 21:57:19.509609   71539 command_runner.go:124] > log_to_journald = false
	I0816 21:57:19.509615   71539 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0816 21:57:19.509623   71539 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0816 21:57:19.509628   71539 command_runner.go:124] > # Path to directory for container attach sockets.
	I0816 21:57:19.509635   71539 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0816 21:57:19.509643   71539 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0816 21:57:19.509649   71539 command_runner.go:124] > bind_mount_prefix = ""
	I0816 21:57:19.509655   71539 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0816 21:57:19.509661   71539 command_runner.go:124] > read_only = false
	I0816 21:57:19.509668   71539 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 21:57:19.509676   71539 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 21:57:19.509683   71539 command_runner.go:124] > # live configuration reload.
	I0816 21:57:19.509687   71539 command_runner.go:124] > log_level = "info"
	I0816 21:57:19.509692   71539 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0816 21:57:19.509703   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:57:19.509710   71539 command_runner.go:124] > log_filter = ""
	I0816 21:57:19.509716   71539 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0816 21:57:19.509726   71539 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 21:57:19.509733   71539 command_runner.go:124] > # separated by comma.
	I0816 21:57:19.509737   71539 command_runner.go:124] > uid_mappings = ""
	I0816 21:57:19.509746   71539 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0816 21:57:19.509755   71539 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 21:57:19.509761   71539 command_runner.go:124] > # separated by comma.
	I0816 21:57:19.509767   71539 command_runner.go:124] > gid_mappings = ""
	I0816 21:57:19.509775   71539 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 21:57:19.509781   71539 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0816 21:57:19.509790   71539 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0816 21:57:19.509796   71539 command_runner.go:124] > ctr_stop_timeout = 30
	I0816 21:57:19.509803   71539 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0816 21:57:19.509810   71539 command_runner.go:124] > # and manage their lifecycle.
	I0816 21:57:19.509816   71539 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0816 21:57:19.509823   71539 command_runner.go:124] > manage_ns_lifecycle = true
	I0816 21:57:19.509829   71539 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 21:57:19.509837   71539 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0816 21:57:19.509846   71539 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0816 21:57:19.509854   71539 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0816 21:57:19.509858   71539 command_runner.go:124] > drop_infra_ctr = false
	I0816 21:57:19.509864   71539 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 21:57:19.509874   71539 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0816 21:57:19.509884   71539 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 21:57:19.509891   71539 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0816 21:57:19.509897   71539 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0816 21:57:19.509906   71539 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0816 21:57:19.509913   71539 command_runner.go:124] > namespaces_dir = "/var/run"
	I0816 21:57:19.509923   71539 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 21:57:19.509930   71539 command_runner.go:124] > pinns_path = ""
	I0816 21:57:19.509939   71539 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 21:57:19.509948   71539 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0816 21:57:19.509957   71539 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0816 21:57:19.509964   71539 command_runner.go:124] > default_runtime = "runc"
	I0816 21:57:19.509970   71539 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 21:57:19.509997   71539 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0816 21:57:19.510008   71539 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0816 21:57:19.510021   71539 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0816 21:57:19.510024   71539 command_runner.go:124] > #
	I0816 21:57:19.510031   71539 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0816 21:57:19.510041   71539 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0816 21:57:19.510047   71539 command_runner.go:124] > #  runtime_type = "oci"
	I0816 21:57:19.510057   71539 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0816 21:57:19.510067   71539 command_runner.go:124] > #  privileged_without_host_devices = false
	I0816 21:57:19.510076   71539 command_runner.go:124] > #  allowed_annotations = []
	I0816 21:57:19.510079   71539 command_runner.go:124] > # Where:
	I0816 21:57:19.510085   71539 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0816 21:57:19.510094   71539 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0816 21:57:19.510103   71539 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 21:57:19.510113   71539 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0816 21:57:19.510120   71539 command_runner.go:124] > #   in $PATH.
	I0816 21:57:19.510127   71539 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0816 21:57:19.510134   71539 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0816 21:57:19.510141   71539 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0816 21:57:19.510147   71539 command_runner.go:124] > #   state.
	I0816 21:57:19.510153   71539 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 21:57:19.510162   71539 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0816 21:57:19.510174   71539 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 21:57:19.510185   71539 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 21:57:19.510190   71539 command_runner.go:124] > #   The currently recognized values are:
	I0816 21:57:19.510199   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 21:57:19.510208   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 21:57:19.510217   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 21:57:19.510223   71539 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0816 21:57:19.510228   71539 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0816 21:57:19.510236   71539 command_runner.go:124] > runtime_type = "oci"
	I0816 21:57:19.510243   71539 command_runner.go:124] > runtime_root = "/run/runc"
	I0816 21:57:19.510249   71539 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0816 21:57:19.510256   71539 command_runner.go:124] > # running containers
	I0816 21:57:19.510260   71539 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0816 21:57:19.510269   71539 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0816 21:57:19.510276   71539 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0816 21:57:19.510284   71539 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0816 21:57:19.510293   71539 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0816 21:57:19.510301   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0816 21:57:19.510306   71539 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0816 21:57:19.510317   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0816 21:57:19.510325   71539 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0816 21:57:19.510329   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0816 21:57:19.510338   71539 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 21:57:19.510344   71539 command_runner.go:124] > #
	I0816 21:57:19.510350   71539 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0816 21:57:19.510360   71539 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 21:57:19.510369   71539 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 21:57:19.510377   71539 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 21:57:19.510385   71539 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 21:57:19.510391   71539 command_runner.go:124] > [crio.image]
	I0816 21:57:19.510397   71539 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0816 21:57:19.510404   71539 command_runner.go:124] > default_transport = "docker://"
	I0816 21:57:19.510411   71539 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0816 21:57:19.510420   71539 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 21:57:19.510426   71539 command_runner.go:124] > global_auth_file = ""
	I0816 21:57:19.510432   71539 command_runner.go:124] > # The image used to instantiate infra containers.
	I0816 21:57:19.510439   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:57:19.510444   71539 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0816 21:57:19.510453   71539 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 21:57:19.510466   71539 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 21:57:19.510475   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:57:19.510482   71539 command_runner.go:124] > pause_image_auth_file = ""
	I0816 21:57:19.510488   71539 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0816 21:57:19.510497   71539 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0816 21:57:19.510503   71539 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0816 21:57:19.510517   71539 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0816 21:57:19.510522   71539 command_runner.go:124] > pause_command = "/pause"
	I0816 21:57:19.510531   71539 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0816 21:57:19.510539   71539 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 21:57:19.510548   71539 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0816 21:57:19.510557   71539 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 21:57:19.510565   71539 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0816 21:57:19.510571   71539 command_runner.go:124] > signature_policy = ""
	I0816 21:57:19.510595   71539 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0816 21:57:19.510604   71539 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 21:57:19.510609   71539 command_runner.go:124] > # changing them here.
	I0816 21:57:19.510613   71539 command_runner.go:124] > #insecure_registries = "[]"
	I0816 21:57:19.510620   71539 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 21:57:19.510633   71539 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0816 21:57:19.510643   71539 command_runner.go:124] > image_volumes = "mkdir"
	I0816 21:57:19.510657   71539 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0816 21:57:19.510671   71539 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0816 21:57:19.510682   71539 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0816 21:57:19.510690   71539 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0816 21:57:19.510694   71539 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0816 21:57:19.510702   71539 command_runner.go:124] > #registries = [
	I0816 21:57:19.510705   71539 command_runner.go:124] > # ]
	I0816 21:57:19.510713   71539 command_runner.go:124] > # Temporary directory to use for storing big files
	I0816 21:57:19.510718   71539 command_runner.go:124] > big_files_temporary_dir = ""
	I0816 21:57:19.510727   71539 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0816 21:57:19.510733   71539 command_runner.go:124] > # CNI plugins.
	I0816 21:57:19.510737   71539 command_runner.go:124] > [crio.network]
	I0816 21:57:19.510745   71539 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0816 21:57:19.510753   71539 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0816 21:57:19.510760   71539 command_runner.go:124] > # cni_default_network = "kindnet"
	I0816 21:57:19.510767   71539 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0816 21:57:19.510773   71539 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0816 21:57:19.510779   71539 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0816 21:57:19.510787   71539 command_runner.go:124] > plugin_dirs = [
	I0816 21:57:19.510794   71539 command_runner.go:124] > 	"/opt/cni/bin/",
	I0816 21:57:19.510797   71539 command_runner.go:124] > ]
	I0816 21:57:19.510807   71539 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 21:57:19.510814   71539 command_runner.go:124] > [crio.metrics]
	I0816 21:57:19.510820   71539 command_runner.go:124] > # Globally enable or disable metrics support.
	I0816 21:57:19.510828   71539 command_runner.go:124] > enable_metrics = false
	I0816 21:57:19.510837   71539 command_runner.go:124] > # The port on which the metrics server will listen.
	I0816 21:57:19.510845   71539 command_runner.go:124] > metrics_port = 9090
	I0816 21:57:19.510899   71539 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0816 21:57:19.510908   71539 command_runner.go:124] > metrics_socket = ""
	I0816 21:57:19.512172   71539 command_runner.go:124] ! time="2021-08-16T21:57:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:57:19.512201   71539 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
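
`crio config` prints the effective TOML on stdout, while the version banner above goes to stderr. One property worth checking in that output is that `cgroup_manager = "systemd"` agrees with the kubelet's `cgroupDriver: systemd` further down; a sketch of that check, using github.com/BurntSushi/toml purely as an illustrative decoder:

	package main

	import (
		"fmt"
		"os/exec"

		"github.com/BurntSushi/toml" // illustrative choice of TOML decoder
	)

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		// Output() captures stdout only, so the stderr banner is ignored.
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			panic(err)
		}
		var cfg crioConfig
		if _, err := toml.Decode(string(out), &cfg); err != nil {
			panic(err)
		}
		if got := cfg.Crio.Runtime.CgroupManager; got != "systemd" {
			fmt.Printf("cgroup manager mismatch with kubelet: %q\n", got)
		}
	}
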
	I0816 21:57:19.512349   71539 cni.go:93] Creating CNI manager for ""
	I0816 21:57:19.512361   71539 cni.go:154] 1 nodes found, recommending kindnet
	I0816 21:57:19.512369   71539 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 21:57:19.512381   71539 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210816215712-6487 NodeName:multinode-20210816215712-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
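
The options above pair the pod subnet (10.244.0.0/16) with the service CIDR (10.96.0.0/12); the two ranges must not overlap, or pod and service routing would collide. Because CIDR blocks are aligned, two blocks intersect exactly when one contains the other's base address, so the check is short:

	package main

	import (
		"fmt"
		"net"
	)

	// overlaps reports whether two CIDR blocks intersect; for aligned
	// blocks it suffices to test containment of each base address.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		_, pod, _ := net.ParseCIDR("10.244.0.0/16")
		_, svc, _ := net.ParseCIDR("10.96.0.0/12")
		fmt.Println("overlap:", overlaps(pod, svc)) // false for this pair
	}
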
	I0816 21:57:19.512502   71539 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210816215712-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
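
The evictionHard thresholds above are literal percent strings. A fragment like this must never pass through Go's Printf family unescaped: `%"` is parsed as a formatting verb with no operand, which is exactly how the `0%!"(MISSING)` artifact shows up in raw dumps of this config. A minimal reproduction:

	package main

	import "fmt"

	func main() {
		// %" is read as a verb with a missing operand:
		fmt.Printf("nodefs.available: \"0%\"\n")
		// prints: nodefs.available: "0%!"(MISSING)

		// doubling the percent keeps it literal:
		fmt.Printf("nodefs.available: \"0%%\"\n")
		// prints: nodefs.available: "0%"
	}
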
	
	I0816 21:57:19.512595   71539 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210816215712-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 21:57:19.512639   71539 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 21:57:19.518500   71539 command_runner.go:124] > kubeadm
	I0816 21:57:19.518519   71539 command_runner.go:124] > kubectl
	I0816 21:57:19.518523   71539 command_runner.go:124] > kubelet
	I0816 21:57:19.519059   71539 binaries.go:44] Found k8s binaries, skipping transfer
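
The "Found k8s binaries, skipping transfer" decision reduces to listing the versioned binaries directory over SSH and confirming all three names are present; a local sketch of the same check (path taken from the log, helper logic invented):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "ls", "/var/lib/minikube/binaries/v1.21.3").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, name := range strings.Fields(string(out)) {
			have[name] = true
		}
		for _, want := range []string{"kubeadm", "kubectl", "kubelet"} {
			if !have[want] {
				fmt.Println("missing, would transfer:", want)
			}
		}
	}
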
	I0816 21:57:19.519117   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 21:57:19.525266   71539 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (560 bytes)
	I0816 21:57:19.536602   71539 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 21:57:19.547618   71539 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0816 21:57:19.558412   71539 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 21:57:19.560964   71539 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 21:57:19.568794   71539 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487 for IP: 192.168.49.2
	I0816 21:57:19.568834   71539 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 21:57:19.568855   71539 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 21:57:19.568897   71539 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key
	I0816 21:57:19.568909   71539 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt with IP's: []
	I0816 21:57:19.648871   71539 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt ...
	I0816 21:57:19.648894   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt: {Name:mkc2ee63f87966cb688347e742e32365d1fc6853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:19.649095   71539 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key ...
	I0816 21:57:19.649112   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key: {Name:mkb5cca9224aa7ed0724bae8889f7db87edd93c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:19.649216   71539 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key.dd3b5fb2
	I0816 21:57:19.649232   71539 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 21:57:19.901793   71539 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt.dd3b5fb2 ...
	I0816 21:57:19.901823   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt.dd3b5fb2: {Name:mkc8e42523721074d4ca6a196cf27fd1c59984c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:19.902021   71539 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key.dd3b5fb2 ...
	I0816 21:57:19.902037   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key.dd3b5fb2: {Name:mkfc410e296f84a1765bc45f0715ba687e9d83e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:19.902144   71539 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt
	I0816 21:57:19.902221   71539 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key
	I0816 21:57:19.902285   71539 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.key
	I0816 21:57:19.902297   71539 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.crt with IP's: []
	I0816 21:57:20.017504   71539 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.crt ...
	I0816 21:57:20.017537   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.crt: {Name:mkdc6e23482bbb499b4d0b7af5ec579c6c5d9d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:20.017729   71539 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.key ...
	I0816 21:57:20.017745   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.key: {Name:mk04b48d97d341d1a3457a3e779fdb7944bdd6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
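
The apiserver certificate above is generated with the SAN IPs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1], i.e. the node IP, the in-cluster kubernetes service VIP, and loopback. A stdlib sketch of producing an x509 certificate with those IP SANs; it self-signs for brevity, whereas minikube signs with its CA:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed: the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
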
	I0816 21:57:20.017843   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 21:57:20.017860   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 21:57:20.017869   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 21:57:20.017881   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 21:57:20.017891   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 21:57:20.017902   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 21:57:20.017912   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 21:57:20.017923   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 21:57:20.017966   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 21:57:20.018002   71539 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 21:57:20.018009   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 21:57:20.018030   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 21:57:20.018055   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 21:57:20.018074   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 21:57:20.018119   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 21:57:20.018146   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:57:20.018160   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem -> /usr/share/ca-certificates/6487.pem
	I0816 21:57:20.018168   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> /usr/share/ca-certificates/64872.pem
	I0816 21:57:20.018995   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 21:57:20.035397   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 21:57:20.128417   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 21:57:20.143841   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 21:57:20.158892   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 21:57:20.173587   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 21:57:20.188604   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 21:57:20.203363   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 21:57:20.218142   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 21:57:20.232998   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 21:57:20.247709   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 21:57:20.262514   71539 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 21:57:20.273326   71539 ssh_runner.go:149] Run: openssl version
	I0816 21:57:20.277402   71539 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0816 21:57:20.277575   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 21:57:20.283943   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:57:20.286648   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:57:20.286672   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:57:20.286701   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:57:20.290794   71539 command_runner.go:124] > b5213941
	I0816 21:57:20.290940   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 21:57:20.297834   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 21:57:20.304176   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 21:57:20.306776   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 21:57:20.306798   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 21:57:20.306829   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 21:57:20.311012   71539 command_runner.go:124] > 51391683
	I0816 21:57:20.311158   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 21:57:20.317685   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 21:57:20.324063   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 21:57:20.326729   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 21:57:20.326788   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 21:57:20.326880   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 21:57:20.331078   71539 command_runner.go:124] > 3ec20f2e
	I0816 21:57:20.331253   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
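
The three hash-and-symlink steps above follow OpenSSL's standard CA directory convention: each certificate under /etc/ssl/certs is made reachable through a symlink named after its subject-name hash with a .0 suffix, which is what `openssl x509 -hash` computes. A minimal sketch of one such step (the certificate path is illustrative):

	# Compute the subject-name hash OpenSSL uses for directory lookups,
	# then expose the certificate under <hash>.0 in /etc/ssl/certs.
	CERT=/usr/share/ca-certificates/minikubeCA.pem   # illustrative path
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
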
	I0816 21:57:20.337694   71539 kubeadm.go:390] StartCluster: {Name:multinode-20210816215712-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0816 21:57:20.337773   71539 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 21:57:20.337813   71539 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 21:57:20.360402   71539 cri.go:76] found id: ""
	I0816 21:57:20.360445   71539 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 21:57:20.366117   71539 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0816 21:57:20.366167   71539 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0816 21:57:20.366187   71539 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0816 21:57:20.366812   71539 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 21:57:20.372888   71539 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 21:57:20.372929   71539 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 21:57:20.378988   71539 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0816 21:57:20.379009   71539 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0816 21:57:20.379017   71539 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0816 21:57:20.379025   71539 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 21:57:20.379053   71539 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 21:57:20.379087   71539 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 21:57:20.429420   71539 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0816 21:57:20.429483   71539 command_runner.go:124] > [preflight] Running pre-flight checks
	I0816 21:57:20.455146   71539 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0816 21:57:20.455227   71539 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0816 21:57:20.455278   71539 command_runner.go:124] > OS: Linux
	I0816 21:57:20.455369   71539 command_runner.go:124] > CGROUPS_CPU: enabled
	I0816 21:57:20.455455   71539 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0816 21:57:20.455506   71539 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0816 21:57:20.455549   71539 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0816 21:57:20.455592   71539 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0816 21:57:20.455643   71539 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0816 21:57:20.455683   71539 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0816 21:57:20.455724   71539 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0816 21:57:20.523041   71539 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 21:57:20.523151   71539 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 21:57:20.523277   71539 command_runner.go:124] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 21:57:20.642111   71539 out.go:204]   - Generating certificates and keys ...
	I0816 21:57:20.638321   71539 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 21:57:20.642253   71539 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0816 21:57:20.642342   71539 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0816 21:57:21.060945   71539 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 21:57:21.155187   71539 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0816 21:57:21.463260   71539 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0816 21:57:21.724726   71539 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0816 21:57:21.768589   71539 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0816 21:57:21.768785   71539 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210816215712-6487] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 21:57:22.074796   71539 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0816 21:57:22.074960   71539 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210816215712-6487] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 21:57:22.309689   71539 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 21:57:22.598731   71539 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 21:57:22.686196   71539 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0816 21:57:22.686289   71539 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 21:57:22.826571   71539 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 21:57:22.977132   71539 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 21:57:23.135266   71539 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 21:57:23.333461   71539 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 21:57:23.340880   71539 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0816 21:57:23.340994   71539 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 21:57:23.341836   71539 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 21:57:23.341896   71539 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0816 21:57:23.400701   71539 out.go:204]   - Booting up control plane ...
	I0816 21:57:23.398042   71539 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 21:57:23.400857   71539 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 21:57:23.406061   71539 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 21:57:23.406999   71539 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 21:57:23.407614   71539 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 21:57:23.409632   71539 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 21:57:36.412687   71539 command_runner.go:124] > [apiclient] All control plane components are healthy after 13.003090 seconds
	I0816 21:57:36.412800   71539 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 21:57:36.422365   71539 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 21:57:36.938072   71539 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0816 21:57:36.938356   71539 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210816215712-6487 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 21:57:37.444230   71539 command_runner.go:124] > [bootstrap-token] Using token: 855gdh.xyaqa780mn4syvqg
	I0816 21:57:37.444234   71539 out.go:204]   - Configuring RBAC rules ...
	I0816 21:57:37.444385   71539 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 21:57:37.448695   71539 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 21:57:37.456095   71539 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 21:57:37.457798   71539 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 21:57:37.459435   71539 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 21:57:37.461041   71539 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 21:57:37.466687   71539 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 21:57:37.662585   71539 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0816 21:57:37.851346   71539 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0816 21:57:37.852156   71539 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0816 21:57:37.852254   71539 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0816 21:57:37.852291   71539 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0816 21:57:37.852380   71539 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 21:57:37.852476   71539 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 21:57:37.852539   71539 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0816 21:57:37.852583   71539 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 21:57:37.852626   71539 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0816 21:57:37.852683   71539 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 21:57:37.852742   71539 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 21:57:37.852841   71539 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0816 21:57:37.852941   71539 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0816 21:57:37.853053   71539 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 855gdh.xyaqa780mn4syvqg \
	I0816 21:57:37.853174   71539 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:ab46675face1967228b7500eeaa65be645c3bcc8b24635f14c9becbff4d6cff0 \
	I0816 21:57:37.853243   71539 command_runner.go:124] > 	--control-plane 
	I0816 21:57:37.853368   71539 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0816 21:57:37.853496   71539 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 855gdh.xyaqa780mn4syvqg \
	I0816 21:57:37.853636   71539 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:ab46675face1967228b7500eeaa65be645c3bcc8b24635f14c9becbff4d6cff0 
	I0816 21:57:37.855015   71539 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0816 21:57:37.855081   71539 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0816 21:57:37.855259   71539 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0816 21:57:37.855347   71539 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
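
The join commands printed above embed a bootstrap token and the CA cert hash. Bootstrap tokens expire (24h by default), so if the printed line goes stale, an equivalent one can be regenerated on the control-plane node with the standard kubeadm CLI:

	# Mint a fresh bootstrap token and print the matching worker join command.
	kubeadm token create --print-join-command
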
	I0816 21:57:37.855365   71539 cni.go:93] Creating CNI manager for ""
	I0816 21:57:37.855372   71539 cni.go:154] 1 nodes found, recommending kindnet
	I0816 21:57:37.857300   71539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 21:57:37.857358   71539 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 21:57:37.860624   71539 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0816 21:57:37.860645   71539 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0816 21:57:37.860652   71539 command_runner.go:124] > Device: 801h/2049d	Inode: 14944926    Links: 1
	I0816 21:57:37.860659   71539 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 21:57:37.860667   71539 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0816 21:57:37.860683   71539 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0816 21:57:37.860701   71539 command_runner.go:124] > Change: 2021-08-10 20:42:17.279076582 +0000
	I0816 21:57:37.860708   71539 command_runner.go:124] >  Birth: -
	I0816 21:57:37.860748   71539 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 21:57:37.860760   71539 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 21:57:37.872856   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 21:57:38.197452   71539 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0816 21:57:38.200656   71539 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0816 21:57:38.204918   71539 command_runner.go:124] > serviceaccount/kindnet created
	I0816 21:57:38.210433   71539 command_runner.go:124] > daemonset.apps/kindnet created
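
With a single node found, minikube recommends kindnet and applies its manifest above. A quick way to confirm the CNI DaemonSet actually rolled out, assuming kubectl is pointed at this cluster (timeout illustrative):

	# kindnet runs as a DaemonSet in kube-system; DESIRED and READY should match.
	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m
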
	I0816 21:57:38.213976   71539 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 21:57:38.214099   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:38.214132   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=multinode-20210816215712-6487 minikube.k8s.io/updated_at=2021_08_16T21_57_38_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:38.313760   71539 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0816 21:57:38.317055   71539 command_runner.go:124] > -16
	I0816 21:57:38.317095   71539 ops.go:34] apiserver oom_adj: -16
	I0816 21:57:38.317120   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:38.330249   71539 command_runner.go:124] > node/multinode-20210816215712-6487 labeled
	I0816 21:57:38.382707   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:38.883477   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:38.942120   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:39.383854   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:39.446360   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:39.883964   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:39.943631   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:40.383237   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:40.463395   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:40.882961   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:40.944661   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:41.383291   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:41.444098   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:41.883812   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:41.946890   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:42.383655   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:42.445512   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:42.883028   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:42.943114   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:43.383802   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:43.447248   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:43.883834   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:43.945199   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:44.383745   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:44.446255   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:44.883923   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:44.945039   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:45.383650   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:45.443008   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:45.883605   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:45.946410   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:46.383627   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:46.445925   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:46.883515   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:46.945709   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:47.383456   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:47.444768   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:47.883255   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:47.943196   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:48.382924   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:48.445640   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:48.883110   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:48.945633   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:49.382939   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:49.444819   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:49.883365   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:51.824181   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:51.824226   71539 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.940829233s)
	I0816 21:57:51.883360   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:54.400121   71539 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0816 21:57:54.402684   71539 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.519287414s)
	I0816 21:57:54.883256   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 21:57:55.136965   71539 command_runner.go:124] > NAME      SECRETS   AGE
	I0816 21:57:55.136991   71539 command_runner.go:124] > default   1         1s
	I0816 21:57:55.138934   71539 kubeadm.go:985] duration metric: took 16.924888191s to wait for elevateKubeSystemPrivileges.
	I0816 21:57:55.138956   71539 kubeadm.go:392] StartCluster complete in 34.801268587s
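
The repeated NotFound errors above are expected: minikube polls until the namespace controller creates the "default" ServiceAccount, which only happens once the control plane is fully serving. The equivalent wait by hand is a simple retry loop (interval illustrative):

	# Poll for the default ServiceAccount the same way the log does above.
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done
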
	I0816 21:57:55.138976   71539 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:55.139057   71539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:57:55.139969   71539 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 21:57:55.140362   71539 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:57:55.140553   71539 kapi.go:59] client config for multinode-20210816215712-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 21:57:55.140913   71539 cert_rotation.go:137] Starting client certificate rotation controller
	I0816 21:57:55.142212   71539 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0816 21:57:55.142228   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.142235   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.142244   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.148706   71539 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0816 21:57:55.148723   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.148729   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.148732   71539 round_trippers.go:463]     Content-Length: 291
	I0816 21:57:55.148735   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.148737   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.148740   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.148743   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.148759   71539 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b60c8bc-a590-4203-bd1c-3bd5092852e8","resourceVersion":"448","creationTimestamp":"2021-08-16T21:57:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0816 21:57:55.149289   71539 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b60c8bc-a590-4203-bd1c-3bd5092852e8","resourceVersion":"448","creationTimestamp":"2021-08-16T21:57:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0816 21:57:55.149330   71539 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0816 21:57:55.149339   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.149343   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.149347   71539 round_trippers.go:442]     Content-Type: application/json
	I0816 21:57:55.149353   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.151839   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:55.151855   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.151861   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.151866   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.151871   71539 round_trippers.go:463]     Content-Length: 291
	I0816 21:57:55.151875   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.151880   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.151885   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.151920   71539 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b60c8bc-a590-4203-bd1c-3bd5092852e8","resourceVersion":"450","creationTimestamp":"2021-08-16T21:57:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0816 21:57:55.652644   71539 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0816 21:57:55.652667   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.652672   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.652676   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.654701   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:55.654722   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.654727   71539 round_trippers.go:463]     Content-Length: 291
	I0816 21:57:55.654730   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.654733   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.654736   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.654739   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.654742   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.654765   71539 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b60c8bc-a590-4203-bd1c-3bd5092852e8","resourceVersion":"460","creationTimestamp":"2021-08-16T21:57:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0816 21:57:55.654861   71539 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210816215712-6487" rescaled to 1
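
minikube performs the rescale above by writing to the Deployment's Scale subresource directly; the same effect via the CLI is a one-liner, assuming kubectl targets this cluster:

	# Drop CoreDNS from the kubeadm default of 2 replicas to 1.
	kubectl -n kube-system scale deployment coredns --replicas=1
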
	I0816 21:57:55.654905   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 21:57:55.654908   71539 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 21:57:55.657759   71539 out.go:177] * Verifying Kubernetes components...
	I0816 21:57:55.657818   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:57:55.654992   71539 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 21:57:55.655166   71539 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:57:55.657902   71539 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210816215712-6487"
	I0816 21:57:55.657924   71539 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210816215712-6487"
	W0816 21:57:55.657935   71539 addons.go:147] addon storage-provisioner should already be in state true
	I0816 21:57:55.657940   71539 addons.go:59] Setting default-storageclass=true in profile "multinode-20210816215712-6487"
	I0816 21:57:55.657959   71539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210816215712-6487"
	I0816 21:57:55.657964   71539 host.go:66] Checking if "multinode-20210816215712-6487" exists ...
	I0816 21:57:55.658349   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:55.658533   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:55.707140   71539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 21:57:55.707315   71539 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 21:57:55.707332   71539 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 21:57:55.707427   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:57:55.708287   71539 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:57:55.708638   71539 kapi.go:59] client config for multinode-20210816215712-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 21:57:55.710550   71539 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 21:57:55.710570   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.710584   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.710590   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.712654   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:55.712674   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.712680   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.712685   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.712690   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.712694   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.712699   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.712703   71539 round_trippers.go:463]     Content-Length: 109
	I0816 21:57:55.712721   71539 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"461"},"items":[]}
	I0816 21:57:55.713487   71539 addons.go:135] Setting addon default-storageclass=true in "multinode-20210816215712-6487"
	W0816 21:57:55.713505   71539 addons.go:147] addon default-storageclass should already be in state true
	I0816 21:57:55.713532   71539 host.go:66] Checking if "multinode-20210816215712-6487" exists ...
	I0816 21:57:55.713909   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:57:55.736235   71539 command_runner.go:124] > apiVersion: v1
	I0816 21:57:55.736259   71539 command_runner.go:124] > data:
	I0816 21:57:55.736266   71539 command_runner.go:124] >   Corefile: |
	I0816 21:57:55.736272   71539 command_runner.go:124] >     .:53 {
	I0816 21:57:55.736277   71539 command_runner.go:124] >         errors
	I0816 21:57:55.736284   71539 command_runner.go:124] >         health {
	I0816 21:57:55.736291   71539 command_runner.go:124] >            lameduck 5s
	I0816 21:57:55.736296   71539 command_runner.go:124] >         }
	I0816 21:57:55.736302   71539 command_runner.go:124] >         ready
	I0816 21:57:55.736312   71539 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0816 21:57:55.736322   71539 command_runner.go:124] >            pods insecure
	I0816 21:57:55.736338   71539 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0816 21:57:55.736349   71539 command_runner.go:124] >            ttl 30
	I0816 21:57:55.736359   71539 command_runner.go:124] >         }
	I0816 21:57:55.736369   71539 command_runner.go:124] >         prometheus :9153
	I0816 21:57:55.736380   71539 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0816 21:57:55.736391   71539 command_runner.go:124] >            max_concurrent 1000
	I0816 21:57:55.736398   71539 command_runner.go:124] >         }
	I0816 21:57:55.736404   71539 command_runner.go:124] >         cache 30
	I0816 21:57:55.736413   71539 command_runner.go:124] >         loop
	I0816 21:57:55.736419   71539 command_runner.go:124] >         reload
	I0816 21:57:55.736428   71539 command_runner.go:124] >         loadbalance
	I0816 21:57:55.736433   71539 command_runner.go:124] >     }
	I0816 21:57:55.736443   71539 command_runner.go:124] > kind: ConfigMap
	I0816 21:57:55.736448   71539 command_runner.go:124] > metadata:
	I0816 21:57:55.736461   71539 command_runner.go:124] >   creationTimestamp: "2021-08-16T21:57:37Z"
	I0816 21:57:55.736470   71539 command_runner.go:124] >   name: coredns
	I0816 21:57:55.736476   71539 command_runner.go:124] >   namespace: kube-system
	I0816 21:57:55.736562   71539 command_runner.go:124] >   resourceVersion: "269"
	I0816 21:57:55.736588   71539 command_runner.go:124] >   uid: 56dbc674-c7c8-4792-850c-a7aeaaf4bda0
	I0816 21:57:55.738671   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
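
The sed pipeline above injects a hosts block ahead of the forward directive, so in-cluster DNS resolves host.minikube.internal to the gateway address. After the replace, the relevant portion of the Corefile reads:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
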
	I0816 21:57:55.739000   71539 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:57:55.739258   71539 kapi.go:59] client config for multinode-20210816215712-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 21:57:55.740764   71539 node_ready.go:35] waiting up to 6m0s for node "multinode-20210816215712-6487" to be "Ready" ...
	I0816 21:57:55.740852   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:55.740860   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.740867   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.740873   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.742968   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:55.742983   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.742988   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.742992   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.742996   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.743007   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.743012   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.743138   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:55.745051   71539 node_ready.go:49] node "multinode-20210816215712-6487" has status "Ready":"True"
	I0816 21:57:55.745070   71539 node_ready.go:38] duration metric: took 4.277668ms waiting for node "multinode-20210816215712-6487" to be "Ready" ...
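
The readiness check above goes through the API directly; for reference, the CLI equivalent (timeout mirroring the 6m0s wait) would be:

	kubectl wait --for=condition=Ready node/multinode-20210816215712-6487 --timeout=6m
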
	I0816 21:57:55.745082   71539 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 21:57:55.745187   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:57:55.745201   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.745210   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.745216   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.750752   71539 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0816 21:57:55.750774   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.750781   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.750788   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.750793   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.750797   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.750801   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.751263   71539 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:control [truncated 54662 chars]
	I0816 21:57:55.760193   71539 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace to be "Ready" ...
	I0816 21:57:55.760281   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:55.760315   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.760327   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.760333   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.762335   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:55.762608   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:55.762657   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.762670   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.762676   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.762681   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.762687   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.762693   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.762831   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:55.766902   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:55.766925   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:55.766932   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:55.766936   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:55.768831   71539 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 21:57:55.768851   71539 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 21:57:55.768901   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
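
The inspect call above recovers the host port Docker bound to the node container's 22/tcp (32807 in this run), which is what the new ssh client on the next line dials. A self-contained sketch of the same lookup follows; hostSSHPort is an illustrative helper, not minikube's cli_runner.

package example

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort shells out to `docker container inspect -f` with the same Go
// template seen in the log line above and returns the mapped host port.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}
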
	I0816 21:57:55.805509   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:57:55.813073   71539 round_trippers.go:457] Response Status: 200 OK in 46 milliseconds
	I0816 21:57:55.813094   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:55.813101   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:55.813105   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:55.813109   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:55.813114   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:55 GMT
	I0816 21:57:55.813118   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:55.813256   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:55.929213   71539 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 21:57:55.929892   71539 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
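
Both apply commands run inside the node over the SSH connection opened above, invoking the version-matched kubectl (v1.21.3) against the manifests that were just scp'd into /etc/kubernetes/addons. Below is a bare-bones sketch of executing one such command over SSH with golang.org/x/crypto/ssh; runOverSSH is an assumed helper, not ssh_runner itself, and host-key checking is deliberately skipped as it would be for a throwaway local node.

package example

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials a mapped SSH port (e.g. 127.0.0.1:32807 from the log),
// authenticates with the machine's id_rsa, and runs a single command.
func runOverSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	})
	if err != nil {
		return nil, err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}

For example, the storageclass apply above corresponds to something like runOverSSH("127.0.0.1:32807", "docker", "<id_rsa path>", "sudo ... kubectl apply -f /etc/kubernetes/addons/storageclass.yaml").
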
	I0816 21:57:56.314319   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:56.314346   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:56.314355   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:56.314361   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:56.317126   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:56.317150   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:56.317157   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:56.317162   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:56.317168   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:56.317173   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:56.317178   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:56 GMT
	I0816 21:57:56.317690   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:56.318123   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:56.318149   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:56.318157   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:56.318162   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:56.321967   71539 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0816 21:57:56.321985   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:56.321992   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:56.321997   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:56.322001   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:56.322006   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:56.322010   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:56 GMT
	I0816 21:57:56.322384   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:56.423861   71539 command_runner.go:124] > configmap/coredns replaced
	I0816 21:57:56.423915   71539 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
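
Together with the "configmap/coredns replaced" line above, this shows the mechanism: minikube rewrites the coredns ConfigMap so that host.minikube.internal resolves to the .1 gateway address of the node's 192.168.49.x network, then replaces the ConfigMap in-cluster. One plausible shape for that edit is sketched below with client-go; injectHostRecord and the splice point are assumptions, and a real implementation has to place the hosts stanza inside the Corefile's existing ".:53 { ... }" server block, which this version only attempts naively.

package example

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord maps host.minikube.internal to ip via a CoreDNS hosts
// stanza. Sketch only: it splices the stanza right after the ".:53 {"
// opener and updates the kube-system/coredns ConfigMap.
func injectHostRecord(cs kubernetes.Interface, ip string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		return nil // record already present
	}
	stanza := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", ip)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], ".:53 {\n", ".:53 {\n"+stanza, 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}
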
	I0816 21:57:56.632673   71539 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0816 21:57:56.637457   71539 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0816 21:57:56.643127   71539 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0816 21:57:56.712554   71539 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0816 21:57:56.721560   71539 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0816 21:57:56.731257   71539 command_runner.go:124] > pod/storage-provisioner created
	I0816 21:57:56.735650   71539 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0816 21:57:56.737551   71539 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 21:57:56.737575   71539 addons.go:344] enableAddons completed in 1.082593472s
	I0816 21:57:56.813742   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:56.813761   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:56.813766   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:56.813770   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:56.815601   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:56.815618   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:56.815623   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:56.815627   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:56.815631   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:56 GMT
	I0816 21:57:56.815637   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:56.815641   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:56.815751   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:56.816136   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:56.816154   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:56.816163   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:56.816169   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:56.817840   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:56.817856   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:56.817863   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:56.817869   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:56.817874   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:56.817879   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:56.817884   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:56 GMT
	I0816 21:57:56.817966   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:57.314600   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:57.314624   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:57.314631   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:57.314637   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:57.316567   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:57.316594   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:57.316600   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:57.316605   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:57.316610   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:57.316614   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:57.316619   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:57 GMT
	I0816 21:57:57.316734   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:57.317146   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:57.317162   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:57.317169   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:57.317176   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:57.318745   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:57.318762   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:57.318767   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:57.318772   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:57.318777   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:57.318781   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:57.318789   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:57 GMT
	I0816 21:57:57.318896   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:57.814690   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:57.814718   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:57.814724   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:57.814728   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:57.816768   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:57.816789   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:57.816795   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:57.816800   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:57 GMT
	I0816 21:57:57.816804   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:57.816809   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:57.816813   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:57.816957   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:57.817315   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:57.817328   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:57.817332   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:57.817336   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:57.818932   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:57.818946   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:57.818952   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:57 GMT
	I0816 21:57:57.818956   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:57.818960   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:57.818964   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:57.818969   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:57.819056   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:57.819284   71539 pod_ready.go:102] pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace has status "Ready":"False"
	I0816 21:57:58.314691   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:58.314713   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:58.314720   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:58.314726   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:58.317193   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:58.317215   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:58.317222   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:58.317227   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:58.317232   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:58.317237   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:58.317242   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:58 GMT
	I0816 21:57:58.317342   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:58.317686   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:58.317700   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:58.317705   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:58.317709   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:58.319365   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:58.319385   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:58.319393   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:58.319398   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:58.319403   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:58.319407   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:58.319410   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:58 GMT
	I0816 21:57:58.319530   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:58.814101   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:58.814125   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:58.814133   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:58.814139   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:58.816326   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:58.816343   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:58.816348   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:58.816351   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:58.816354   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:58.816357   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:58.816362   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:58 GMT
	I0816 21:57:58.816428   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:58.816735   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:58.816747   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:58.816751   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:58.816755   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:58.818363   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:58.818379   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:58.818383   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:58.818387   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:58.818390   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:58.818430   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:58.818438   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:58 GMT
	I0816 21:57:58.818526   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:59.313832   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:59.313854   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:59.313859   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:59.313864   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:59.316225   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:59.316246   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:59.316253   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:59.316258   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:59.316263   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:59 GMT
	I0816 21:57:59.316267   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:59.316272   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:59.316371   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:59.316711   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:59.316726   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:59.316731   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:59.316735   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:59.318315   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:59.318333   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:59.318342   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:59.318347   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:59.318352   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:59.318356   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:59.318360   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:59 GMT
	I0816 21:57:59.318524   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:57:59.813989   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:57:59.814010   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:59.814015   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:59.814019   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:59.816219   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:57:59.816239   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:59.816243   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:59.816246   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:59.816249   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:59.816253   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:59 GMT
	I0816 21:57:59.816255   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:59.816423   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:57:59.816748   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:57:59.816761   71539 round_trippers.go:438] Request Headers:
	I0816 21:57:59.816766   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:57:59.816770   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:57:59.818284   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:57:59.818298   71539 round_trippers.go:460] Response Headers:
	I0816 21:57:59.818302   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:57:59 GMT
	I0816 21:57:59.818306   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:57:59.818309   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:57:59.818314   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:57:59.818318   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:57:59.818396   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:00.313979   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:00.314001   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:00.314006   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:00.314010   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:00.316394   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:00.316412   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:00.316417   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:00.316420   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:00.316424   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:00.316429   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:00.316433   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:00 GMT
	I0816 21:58:00.316532   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:00.316927   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:00.316942   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:00.316949   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:00.316955   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:00.318530   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:00.318548   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:00.318554   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:00.318558   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:00.318563   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:00.318567   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:00.318572   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:00 GMT
	I0816 21:58:00.318661   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:00.318902   71539 pod_ready.go:102] pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:00.814198   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:00.814223   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:00.814230   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:00.814237   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:00.816528   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:00.816549   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:00.816555   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:00.816560   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:00.816565   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:00 GMT
	I0816 21:58:00.816569   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:00.816574   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:00.816656   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:00.816990   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:00.817003   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:00.817008   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:00.817012   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:00.818607   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:00.818642   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:00.818649   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:00.818654   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:00.818659   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:00 GMT
	I0816 21:58:00.818663   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:00.818668   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:00.818825   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:01.314402   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:01.314428   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:01.314435   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:01.314442   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:01.317531   71539 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0816 21:58:01.317551   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:01.317558   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:01.317563   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:01.317567   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:01.317572   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:01.317576   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:01 GMT
	I0816 21:58:01.317739   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:01.318064   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:01.318084   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:01.318089   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:01.318093   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:01.319733   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:01.319748   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:01.319752   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:01.319756   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:01.319759   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:01.319763   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:01.319767   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:01 GMT
	I0816 21:58:01.319882   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:01.814551   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:01.814573   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:01.814585   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:01.814590   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:01.816601   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:01.816618   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:01.816623   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:01.816627   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:01.816629   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:01.816632   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:01 GMT
	I0816 21:58:01.816637   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:01.816753   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDel
etion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:01.817065   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:01.817085   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:01.817089   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:01.817093   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:01.818603   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:01.818615   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:01.818620   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:01.818629   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:01.818633   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:01 GMT
	I0816 21:58:01.818635   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:01.818639   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:01.818802   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:02.314479   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:02.314502   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:02.314508   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:02.314512   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:02.316751   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:02.316770   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:02.316774   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:02.316778   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:02.316781   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:02.316784   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:02 GMT
	I0816 21:58:02.316788   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:02.316901   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:02.317266   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:02.317283   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:02.317288   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:02.317292   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:02.318895   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:02.318915   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:02.318922   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:02 GMT
	I0816 21:58:02.318927   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:02.318931   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:02.318936   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:02.318943   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:02.319034   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:02.319267   71539 pod_ready.go:102] pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:02.813874   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:02.813895   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:02.813900   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:02.813904   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:02.816130   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:02.816154   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:02.816166   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:02.816173   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:02.816186   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:02.816192   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:02.816196   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:02 GMT
	I0816 21:58:02.816299   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-chcnk","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"97987b37-d278-4eb4-8573-eac407b1d4f2","resourceVersion":"454","creationTimestamp":"2021-08-16T21:57:55Z","deletionTimestamp":"2021-08-16T21:58:25Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5701 chars]
	I0816 21:58:02.816628   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:02.816644   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:02.816650   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:02.816654   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:02.818323   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:02.818338   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:02.818344   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:02.818350   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:02 GMT
	I0816 21:58:02.818355   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:02.818359   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:02.818364   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:02.818545   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:03.313975   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-chcnk
	I0816 21:58:03.313996   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:03.314002   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:03.314006   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:03.316183   71539 round_trippers.go:457] Response Status: 404 Not Found in 2 milliseconds
	I0816 21:58:03.316202   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:03.316207   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:03.316211   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:03.316214   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:03.316217   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:03.316220   71539 round_trippers.go:463]     Content-Length: 216
	I0816 21:58:03.316224   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:03 GMT
	I0816 21:58:03.316248   71539 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-chcnk\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-chcnk","kind":"pods"},"code":404}
	I0816 21:58:03.316637   71539 pod_ready.go:97] error getting pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-chcnk" not found
	I0816 21:58:03.316657   71539 pod_ready.go:81] duration metric: took 7.556436495s waiting for pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace to be "Ready" ...
	E0816 21:58:03.316666   71539 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-chcnk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-chcnk" not found
	I0816 21:58:03.316673   71539 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:03.316716   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:03.316724   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:03.316728   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:03.316732   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:03.318658   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:03.318675   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:03.318680   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:03.318683   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:03.318686   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:03.318689   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:03.318692   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:03 GMT
	I0816 21:58:03.318772   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:03.319127   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:03.319143   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:03.319150   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:03.319156   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:03.320845   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:03.320862   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:03.320868   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:03.320873   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:03.320877   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:03.320882   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:03.320886   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:03 GMT
	I0816 21:58:03.320982   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:03.822010   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:03.822036   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:03.822043   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:03.822049   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:03.824314   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:03.824334   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:03.824340   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:03.824345   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:03.824349   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:03 GMT
	I0816 21:58:03.824354   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:03.824359   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:03.824462   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:03.824797   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:03.824811   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:03.824816   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:03.824819   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:03.826460   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:03.826480   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:03.826486   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:03.826492   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:03 GMT
	I0816 21:58:03.826497   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:03.826502   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:03.826511   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:03.826627   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:04.322239   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:04.322262   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:04.322269   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:04.322276   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:04.324660   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:04.324683   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:04.324690   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:04.324695   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:04 GMT
	I0816 21:58:04.324700   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:04.324705   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:04.324709   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:04.324798   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:04.325122   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:04.325135   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:04.325140   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:04.325144   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:04.326716   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:04.326732   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:04.326738   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:04.326743   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:04.326748   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:04 GMT
	I0816 21:58:04.326752   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:04.326757   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:04.326857   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:04.821422   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:04.821444   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:04.821449   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:04.821453   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:04.823241   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:04.823265   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:04.823270   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:04.823274   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:04.823277   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:04.823280   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:04.823283   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:04 GMT
	I0816 21:58:04.823367   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:04.823657   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:04.823669   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:04.823674   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:04.823678   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:04.825185   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:04.825206   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:04.825213   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:04 GMT
	I0816 21:58:04.825218   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:04.825222   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:04.825227   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:04.825232   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:04.825325   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:05.322094   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:05.322116   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:05.322122   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:05.322126   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:05.324482   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:05.324501   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:05.324506   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:05.324509   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:05.324513   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:05.324516   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:05.324519   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:05 GMT
	I0816 21:58:05.324628   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:05.324939   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:05.324952   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:05.324957   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:05.324961   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:05.326517   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:05.326536   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:05.326542   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:05.326547   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:05 GMT
	I0816 21:58:05.326552   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:05.326556   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:05.326560   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:05.326635   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:05.326911   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:05.822213   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:05.822234   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:05.822239   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:05.822243   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:05.824228   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:05.824245   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:05.824249   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:05.824253   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:05.824256   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:05.824259   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:05 GMT
	I0816 21:58:05.824264   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:05.824358   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:05.824692   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:05.824706   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:05.824711   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:05.824715   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:05.826193   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:05.826207   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:05.826212   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:05.826215   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:05.826218   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:05 GMT
	I0816 21:58:05.826221   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:05.826224   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:05.826333   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:06.321648   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:06.321674   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:06.321679   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:06.321683   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:06.323752   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:06.323770   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:06.323774   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:06.323778   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:06.323781   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:06.323784   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:06.323787   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:06 GMT
	I0816 21:58:06.323942   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:06.324276   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:06.324289   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:06.324294   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:06.324297   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:06.325894   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:06.325920   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:06.325927   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:06.325932   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:06.325936   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:06.325941   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:06.325946   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:06 GMT
	I0816 21:58:06.326073   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:06.821463   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:06.821488   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:06.821495   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:06.821501   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:06.823574   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:06.823593   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:06.823600   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:06.823605   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:06.823620   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:06.823628   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:06.823632   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:06 GMT
	I0816 21:58:06.823733   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:06.824156   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:06.824171   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:06.824176   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:06.824182   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:06.825849   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:06.825864   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:06.825869   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:06 GMT
	I0816 21:58:06.825874   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:06.825879   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:06.825884   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:06.825889   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:06.826001   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:07.321587   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:07.321618   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:07.321625   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:07.321631   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:07.323710   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:07.323729   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:07.323735   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:07 GMT
	I0816 21:58:07.323740   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:07.323744   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:07.323749   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:07.323753   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:07.323988   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:07.324299   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:07.324313   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:07.324318   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:07.324321   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:07.325864   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:07.325877   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:07.325883   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:07 GMT
	I0816 21:58:07.325888   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:07.325892   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:07.325896   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:07.325900   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:07.325995   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:07.821692   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:07.821718   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:07.821723   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:07.821728   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:07.823872   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:07.823891   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:07.823895   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:07.823899   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:07.823923   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:07.823928   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:07.823931   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:07 GMT
	I0816 21:58:07.824023   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:07.824370   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:07.824383   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:07.824388   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:07.824392   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:07.826018   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:07.826035   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:07.826041   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:07.826046   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:07 GMT
	I0816 21:58:07.826050   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:07.826055   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:07.826062   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:07.826207   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:07.826424   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:08.321729   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:08.321754   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:08.321760   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:08.321764   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:08.324024   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:08.324045   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:08.324050   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:08.324054   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:08.324057   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:08 GMT
	I0816 21:58:08.324060   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:08.324063   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:08.324201   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:08.324617   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:08.324635   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:08.324643   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:08.324649   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:08.326284   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:08.326303   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:08.326307   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:08.326310   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:08.326314   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:08.326316   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:08 GMT
	I0816 21:58:08.326319   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:08.326419   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:08.822042   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:08.822067   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:08.822073   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:08.822077   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:08.826259   71539 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0816 21:58:08.826280   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:08.826287   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:08 GMT
	I0816 21:58:08.826291   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:08.826296   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:08.826301   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:08.826306   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:08.826405   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:08.826734   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:08.826747   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:08.826752   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:08.826755   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:08.828222   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:08.828240   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:08.828246   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:08.828251   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:08.828255   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:08.828260   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:08 GMT
	I0816 21:58:08.828265   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:08.828383   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:09.321680   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:09.321705   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:09.321716   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:09.321721   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:09.324310   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:09.324332   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:09.324336   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:09.324340   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:09.324343   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:09.324346   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:09.324350   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:09 GMT
	I0816 21:58:09.324828   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:09.325369   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:09.325406   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:09.325427   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:09.325446   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:09.327836   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:09.327855   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:09.327859   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:09.327863   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:09 GMT
	I0816 21:58:09.327866   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:09.327868   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:09.327871   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:09.327970   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:09.821458   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:09.821483   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:09.821488   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:09.821492   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:09.823634   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:09.823652   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:09.823657   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:09.823660   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:09.823663   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:09.823666   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:09 GMT
	I0816 21:58:09.823669   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:09.823751   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:09.824068   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:09.824082   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:09.824095   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:09.824099   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:09.825645   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:09.825660   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:09.825665   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:09.825668   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:09.825671   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:09.825674   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:09.825678   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:09 GMT
	I0816 21:58:09.825809   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:10.321372   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:10.321395   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:10.321400   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:10.321405   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:10.323820   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:10.323838   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:10.323842   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:10.323846   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:10.323849   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:10.323852   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:10.323855   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:10 GMT
	I0816 21:58:10.323970   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:10.324309   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:10.324325   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:10.324331   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:10.324336   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:10.325971   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:10.325985   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:10.325990   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:10.325993   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:10 GMT
	I0816 21:58:10.325996   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:10.326001   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:10.326005   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:10.326173   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:10.326417   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:10.821712   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:10.821735   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:10.821740   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:10.821744   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:10.823997   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:10.824017   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:10.824024   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:10.824029   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:10.824035   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:10.824039   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:10.824043   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:10 GMT
	I0816 21:58:10.824128   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:10.824514   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:10.824529   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:10.824534   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:10.824538   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:10.826098   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:10.826115   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:10.826121   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:10.826126   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:10.826130   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:10.826135   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:10.826139   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:10 GMT
	I0816 21:58:10.826238   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:11.321756   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:11.321780   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:11.321785   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:11.321789   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:11.326556   71539 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0816 21:58:11.326579   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:11.326585   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:11 GMT
	I0816 21:58:11.326590   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:11.326595   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:11.326599   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:11.326604   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:11.326743   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:11.327048   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:11.327059   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:11.327064   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:11.327076   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:11.328691   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:11.328707   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:11.328714   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:11.328721   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:11.328725   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:11.328729   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:11.328734   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:11 GMT
	I0816 21:58:11.328860   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:11.821399   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:11.821421   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:11.821427   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:11.821431   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:11.823571   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:11.823592   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:11.823601   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:11.823606   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:11.823611   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:11 GMT
	I0816 21:58:11.823615   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:11.823620   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:11.823743   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:11.824120   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:11.824134   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:11.824139   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:11.824143   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:11.825804   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:11.825822   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:11.825828   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:11.825833   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:11.825837   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:11.825841   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:11.825845   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:11 GMT
	I0816 21:58:11.825945   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:12.321525   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:12.321547   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:12.321553   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:12.321557   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:12.323884   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:12.323919   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:12.323925   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:12.323930   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:12.323935   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:12.323940   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:12.323945   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:12 GMT
	I0816 21:58:12.324043   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:12.324363   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:12.324376   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:12.324381   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:12.324385   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:12.325931   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:12.325946   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:12.325951   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:12.325954   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:12 GMT
	I0816 21:58:12.325959   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:12.325964   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:12.325971   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:12.326111   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:12.822050   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:12.822073   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:12.822079   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:12.822083   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:12.824343   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:12.824361   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:12.824367   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:12.824372   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:12 GMT
	I0816 21:58:12.824378   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:12.824383   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:12.824388   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:12.824503   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:12.824843   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:12.824857   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:12.824864   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:12.824868   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:12.826376   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:12.826390   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:12.826394   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:12 GMT
	I0816 21:58:12.826398   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:12.826403   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:12.826407   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:12.826413   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:12.826496   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:12.826748   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	I0816 21:58:13.322150   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:13.322173   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:13.322178   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:13.322182   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:13.324541   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:13.324563   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:13.324572   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:13.324583   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:13.324588   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:13 GMT
	I0816 21:58:13.324594   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:13.324599   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:13.324757   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:13.325079   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:13.325093   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:13.325097   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:13.325103   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:13.326692   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:13.326710   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:13.326717   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:13.326721   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:13.326729   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:13.326734   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:13.326739   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:13 GMT
	I0816 21:58:13.326859   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:13.822395   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:13.822420   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:13.822426   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:13.822430   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:13.824417   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:13.824439   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:13.824445   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:13.824451   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:13.824458   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:13.824462   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:13.824466   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:13 GMT
	I0816 21:58:13.824550   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:13.824849   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:13.824861   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:13.824866   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:13.824870   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:13.826459   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:13.826477   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:13.826482   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:13.826487   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:13.826492   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:13.826496   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:13 GMT
	I0816 21:58:13.826499   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:13.826597   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:14.322187   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:14.322212   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:14.322218   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:14.322223   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:14.324689   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:14.324704   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:14.324708   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:14 GMT
	I0816 21:58:14.324711   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:14.324714   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:14.324717   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:14.324720   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:14.324800   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:14.325120   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:14.325132   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:14.325137   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:14.325141   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:14.326777   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:14.326796   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:14.326803   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:14.326808   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:14.326813   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:14.326817   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:14.326822   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:14 GMT
	I0816 21:58:14.326957   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	[... 2 essentially identical poll cycles (21:58:14.82 and 21:58:15.32) elided: the same GETs of pod coredns-558bd4d5db-h25nx and node multinode-20210816215712-6487, each returning 200 OK with unchanged bodies (Pod resourceVersion "465", Node resourceVersion "402") ...]
	I0816 21:58:15.326609   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
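Each pod_ready.go:102 line marks one pass of minikube's readiness wait: the Pod is fetched again on a short interval (about every 500ms in this log) until its Ready condition becomes True or the overall timeout expires. A minimal sketch of that polling pattern, assuming k8s.io/client-go and k8s.io/apimachinery, with a hypothetical helper name waitPodReady (not minikube's actual implementation):

    // Package readiness sketches the poll loop implied by the log above.
    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod every 500ms until its Ready condition is
    // True or the timeout expires. Hypothetical helper; minikube's real
    // loop lives in pod_ready.go.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				if cond.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    				// Matches the log: pod "..." has status "Ready":"False"
    				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, cond.Status)
    				return false, nil
    			}
    		}
    		return false, nil // no Ready condition yet; keep polling
    	})
    }

With CoreDNS stuck NotReady, a loop like this keeps returning false, which is exactly the repetition visible in the surrounding log.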
	[... 4 essentially identical poll cycles (21:58:15.82 through 21:58:17.32) elided: same pod and node GETs, 200 OK, responses unchanged (Pod resourceVersion "465", Node resourceVersion "402") ...]
	I0816 21:58:17.327025   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	[... 4 essentially identical poll cycles (21:58:17.82 through 21:58:19.32) elided: same pod and node GETs, 200 OK, responses unchanged (Pod resourceVersion "465", Node resourceVersion "402") ...]
	I0816 21:58:19.327417   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
	[... 3 essentially identical poll cycles (21:58:19.82 through 21:58:20.82) elided: same pod and node GETs, 200 OK, responses unchanged (Pod resourceVersion "465", Node resourceVersion "402") ...]
	I0816 21:58:21.321428   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:21.321451   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:21.321456   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:21.321461   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:21.324234   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:21.324257   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:21.324265   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:21.324270   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:21.324274   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:21.324280   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:21.324286   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:21 GMT
	I0816 21:58:21.324426   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:21.324840   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:21.324862   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:21.324869   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:21.324876   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:21.326603   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:21.326637   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:21.326642   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:21.326649   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:21.326652   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:21.326655   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:21.326659   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:21 GMT
	I0816 21:58:21.326741   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:21.822361   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:21.822389   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:21.822397   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:21.822403   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:21.824266   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:21.824290   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:21.824298   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:21.824303   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:21 GMT
	I0816 21:58:21.824308   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:21.824313   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:21.824318   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:21.824421   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"465","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5626 chars]
	I0816 21:58:21.824754   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:21.824767   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:21.824772   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:21.824776   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:21.826288   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:21.826307   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:21.826313   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:21.826321   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:21.826325   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:21.826329   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:21.826333   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:21 GMT
	I0816 21:58:21.826440   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:21.826674   71539 pod_ready.go:102] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"False"
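Each iteration above pairs a pod GET with a node GET, roughly every 500ms, until the pod's Ready condition flips to True (which it does at 21:58:22 below). A hedged client-go sketch of the condition test itself; the helper name is mine, not minikube's pod_ready code:

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podIsReady reports whether the pod's PodReady condition is True,
    // the same test the poll above repeats until it passes.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
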
	I0816 21:58:22.322053   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:22.322081   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.322087   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.322091   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.324322   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:22.324339   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.324344   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.324347   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.324350   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.324354   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.324357   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.324447   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"519","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0816 21:58:22.324839   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.324854   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.324859   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.324863   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.326486   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.326505   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.326510   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.326513   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.326516   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.326519   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.326522   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.326618   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.326879   71539 pod_ready.go:92] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.326897   71539 pod_ready.go:81] duration metric: took 19.010215027s waiting for pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.326908   71539 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.326965   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210816215712-6487
	I0816 21:58:22.326973   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.326978   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.326986   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.328667   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.328687   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.328693   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.328698   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.328702   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.328706   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.328709   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.328871   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210816215712-6487","namespace":"kube-system","uid":"6ddb85fd-8e82-415f-bdff-c0ae7b4bf5cd","resourceVersion":"381","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8bb80161fdca904f4e120a48ecc38525","kubernetes.io/config.mirror":"8bb80161fdca904f4e120a48ecc38525","kubernetes.io/config.seen":"2021-08-16T21:57:42.759757686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kuber
netes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{ [truncated 5554 chars]
	I0816 21:58:22.329192   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.329207   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.329213   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.329219   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.330595   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.330609   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.330614   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.330617   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.330620   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.330623   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.330626   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.330748   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.330988   71539 pod_ready.go:92] pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.330999   71539 pod_ready.go:81] duration metric: took 4.080132ms waiting for pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.331011   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.331069   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210816215712-6487
	I0816 21:58:22.331083   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.331088   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.331096   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.332642   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.332657   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.332662   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.332665   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.332668   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.332671   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.332674   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.332803   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210816215712-6487","namespace":"kube-system","uid":"38249ebf-4ebe-4baa-ba19-dcf8adfa19dc","resourceVersion":"324","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"ea00c4e672ad786e7f4086914a3c8804","kubernetes.io/config.mirror":"ea00c4e672ad786e7f4086914a3c8804","kubernetes.io/config.seen":"2021-08-16T21:57:42.759771660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.en [truncated 8085 chars]
	I0816 21:58:22.333079   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.333090   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.333095   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.333098   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.334581   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.334591   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.334595   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.334598   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.334601   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.334604   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.334607   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.334723   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.334925   71539 pod_ready.go:92] pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.334935   71539 pod_ready.go:81] duration metric: took 3.911613ms waiting for pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.334943   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.334982   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210816215712-6487
	I0816 21:58:22.334989   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.334994   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.335002   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.336343   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.336356   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.336360   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.336363   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.336366   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.336369   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.336372   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.336502   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210816215712-6487","namespace":"kube-system","uid":"c43c4499-421c-44d1-bde9-7711292c7ab6","resourceVersion":"382","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3fa1b844ada1f70b8ddc6c136b566f22","kubernetes.io/config.mirror":"3fa1b844ada1f70b8ddc6c136b566f22","kubernetes.io/config.seen":"2021-08-16T21:57:42.759773088Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.
mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.sou [truncated 7651 chars]
	I0816 21:58:22.336788   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.336801   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.336806   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.336810   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.338126   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.338140   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.338146   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.338151   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.338155   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.338160   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.338164   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.338246   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.338437   71539 pod_ready.go:92] pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.338447   71539 pod_ready.go:81] duration metric: took 3.498305ms waiting for pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.338454   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22rzz" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.338492   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22rzz
	I0816 21:58:22.338499   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.338503   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.338509   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.339814   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.339839   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.339843   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.339847   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.339850   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.339853   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.339855   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.339937   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-22rzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"09fc57f9-2322-4194-a28f-9f43e4cfd094","resourceVersion":"482","creationTimestamp":"2021-08-16T21:57:54Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fd7b7440-430e-48b2-bb5a-4544d8034ddd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd7b7440-430e-48b2-bb5a-4544d8034ddd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5752 chars]
	I0816 21:58:22.340183   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.340194   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.340199   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.340203   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.341441   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:22.341456   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.341462   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.341467   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.341481   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.341486   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.341494   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.341571   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.341755   71539 pod_ready.go:92] pod "kube-proxy-22rzz" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.341765   71539 pod_ready.go:81] duration metric: took 3.30468ms waiting for pod "kube-proxy-22rzz" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.341772   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.523076   71539 request.go:600] Waited for 181.257946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210816215712-6487
	I0816 21:58:22.523126   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210816215712-6487
	I0816 21:58:22.523131   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.523138   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.523142   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.525220   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:22.525254   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.525259   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.525262   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.525265   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.525269   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.525275   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.525389   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210816215712-6487","namespace":"kube-system","uid":"401c67bf-102e-471d-853c-8f6d512b12ba","resourceVersion":"362","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"955eb76105b940acda068885f974ae80","kubernetes.io/config.mirror":"955eb76105b940acda068885f974ae80","kubernetes.io/config.seen":"2021-08-16T21:57:42.759774136Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kube
rnetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels [truncated 4533 chars]
	I0816 21:58:22.723018   71539 request.go:600] Waited for 197.331968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.723089   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:22.723095   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.723100   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.723106   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.725296   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:22.725317   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.725321   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.725325   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.725328   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.725332   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.725335   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.725436   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:22.725697   71539 pod_ready.go:92] pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:22.725709   71539 pod_ready.go:81] duration metric: took 383.929632ms waiting for pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:22.725715   71539 pod_ready.go:38] duration metric: took 26.980620484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
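The "Waited for ... due to client-side throttling" lines seen during this wait come from client-go's own rate limiter, not the server's priority-and-fairness machinery; it is governed by the QPS and Burst fields on rest.Config, which default to 5 and 10. A sketch with illustrative values (these are examples, not minikube's settings):

    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
    	log.Fatal(err)
    }
    cfg.QPS = 50    // client-go default is 5 requests/second once the burst is spent
    cfg.Burst = 100 // client-go default burst is 10
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs // requests made through cs now throttle far less often
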
	I0816 21:58:22.725735   71539 api_server.go:50] waiting for apiserver process to appear ...
	I0816 21:58:22.725782   71539 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 21:58:22.745495   71539 command_runner.go:124] > 1321
	I0816 21:58:22.746171   71539 api_server.go:70] duration metric: took 27.091233036s to wait for apiserver process to appear ...
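The process check above runs pgrep on the node: -x matches the process name exactly, -n picks the newest match, -f matches against the full command line, and the logged "1321" is the apiserver's PID. A local os/exec sketch of the same check (the real call goes through minikube's ssh_runner inside the node):

    // Hypothetical local equivalent of the apiserver process check.
    out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    if err != nil {
    	log.Fatalf("no matching kube-apiserver process: %v", err) // pgrep exits non-zero on no match
    }
    fmt.Printf("apiserver pid: %s", out)
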
	I0816 21:58:22.746187   71539 api_server.go:86] waiting for apiserver healthz status ...
	I0816 21:58:22.746194   71539 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 21:58:22.750440   71539 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 21:58:22.750506   71539 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0816 21:58:22.750515   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.750520   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.750524   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.751161   71539 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0816 21:58:22.751177   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.751182   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.751185   71539 round_trippers.go:463]     Content-Length: 263
	I0816 21:58:22.751188   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.751191   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.751198   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.751203   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.751237   71539 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0816 21:58:22.751329   71539 api_server.go:139] control plane version: v1.21.3
	I0816 21:58:22.751345   71539 api_server.go:129] duration metric: took 5.153169ms to wait for apiserver health ...
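The health probe is a plain HTTPS GET against /healthz, expecting a 200 with the body "ok", followed by a GET /version to read the control-plane version shown above. A minimal stdlib sketch; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA:

    client := &http.Client{
    	Timeout:   5 * time.Second,
    	Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get("https://192.168.49.2:8443/healthz")
    if err != nil {
    	log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
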
	I0816 21:58:22.751354   71539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 21:58:22.922712   71539 request.go:600] Waited for 171.29729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:58:22.922777   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:58:22.922788   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:22.922797   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:22.922806   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:22.925689   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:22.925710   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:22.925717   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:22 GMT
	I0816 21:58:22.925721   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:22.925726   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:22.925730   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:22.925735   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:22.926258   71539 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"519","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54488 chars]
	I0816 21:58:22.927489   71539 system_pods.go:59] 8 kube-system pods found
	I0816 21:58:22.927513   71539 system_pods.go:61] "coredns-558bd4d5db-h25nx" [91c50f30-030e-467f-a926-607b16ac148d] Running
	I0816 21:58:22.927518   71539 system_pods.go:61] "etcd-multinode-20210816215712-6487" [6ddb85fd-8e82-415f-bdff-c0ae7b4bf5cd] Running
	I0816 21:58:22.927523   71539 system_pods.go:61] "kindnet-qtjn7" [59b79996-21e8-4428-8747-7860b1109cd5] Running
	I0816 21:58:22.927527   71539 system_pods.go:61] "kube-apiserver-multinode-20210816215712-6487" [38249ebf-4ebe-4baa-ba19-dcf8adfa19dc] Running
	I0816 21:58:22.927537   71539 system_pods.go:61] "kube-controller-manager-multinode-20210816215712-6487" [c43c4499-421c-44d1-bde9-7711292c7ab6] Running
	I0816 21:58:22.927543   71539 system_pods.go:61] "kube-proxy-22rzz" [09fc57f9-2322-4194-a28f-9f43e4cfd094] Running
	I0816 21:58:22.927546   71539 system_pods.go:61] "kube-scheduler-multinode-20210816215712-6487" [401c67bf-102e-471d-853c-8f6d512b12ba] Running
	I0816 21:58:22.927550   71539 system_pods.go:61] "storage-provisioner" [6fdeb906-34a1-4a95-9a66-4d7ec70d33c9] Running
	I0816 21:58:22.927556   71539 system_pods.go:74] duration metric: took 176.193531ms to wait for pod list to return data ...
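The eight-pod inventory above comes from a single PodList GET; each line prints the pod's name, UID, and phase. An equivalent client-go sketch, assuming cs and ctx as in the earlier snippets:

    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
    	log.Fatal(err)
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
    	fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }
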
	I0816 21:58:22.927569   71539 default_sa.go:34] waiting for default service account to be created ...
	I0816 21:58:23.123016   71539 request.go:600] Waited for 195.382514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 21:58:23.123094   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 21:58:23.123117   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:23.123123   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:23.123130   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:23.125284   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:23.125315   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:23.125320   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:23 GMT
	I0816 21:58:23.125324   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:23.125327   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:23.125330   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:23.125333   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:23.125336   71539 round_trippers.go:463]     Content-Length: 304
	I0816 21:58:23.125351   71539 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"dee982ec-594c-4d59-98ca-5727be495ae2","resourceVersion":"443","creationTimestamp":"2021-08-16T21:57:54Z"},"secrets":[{"name":"default-token-6jkdj"}]}]}
	I0816 21:58:23.125854   71539 default_sa.go:45] found service account: "default"
	I0816 21:58:23.125868   71539 default_sa.go:55] duration metric: took 198.292409ms for default service account to be created ...
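The service-account wait lists the default namespace's ServiceAccounts until one named "default" appears (it is created asynchronously by the controller manager shortly after the namespace exists). The same check as a point query, again assuming cs and ctx:

    sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    if err != nil {
    	log.Fatalf("default service account not created yet: %v", err)
    }
    fmt.Printf("found service account: %q\n", sa.Name)
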
	I0816 21:58:23.125875   71539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 21:58:23.322202   71539 request.go:600] Waited for 196.261344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:58:23.322254   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:58:23.322260   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:23.322265   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:23.322270   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:23.325239   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:23.325265   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:23.325272   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:23.325277   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:23.325281   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:23.325285   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:23.325290   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:23 GMT
	I0816 21:58:23.325780   71539 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"519","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54488 chars]
	I0816 21:58:23.327002   71539 system_pods.go:86] 8 kube-system pods found
	I0816 21:58:23.327023   71539 system_pods.go:89] "coredns-558bd4d5db-h25nx" [91c50f30-030e-467f-a926-607b16ac148d] Running
	I0816 21:58:23.327029   71539 system_pods.go:89] "etcd-multinode-20210816215712-6487" [6ddb85fd-8e82-415f-bdff-c0ae7b4bf5cd] Running
	I0816 21:58:23.327033   71539 system_pods.go:89] "kindnet-qtjn7" [59b79996-21e8-4428-8747-7860b1109cd5] Running
	I0816 21:58:23.327038   71539 system_pods.go:89] "kube-apiserver-multinode-20210816215712-6487" [38249ebf-4ebe-4baa-ba19-dcf8adfa19dc] Running
	I0816 21:58:23.327043   71539 system_pods.go:89] "kube-controller-manager-multinode-20210816215712-6487" [c43c4499-421c-44d1-bde9-7711292c7ab6] Running
	I0816 21:58:23.327048   71539 system_pods.go:89] "kube-proxy-22rzz" [09fc57f9-2322-4194-a28f-9f43e4cfd094] Running
	I0816 21:58:23.327053   71539 system_pods.go:89] "kube-scheduler-multinode-20210816215712-6487" [401c67bf-102e-471d-853c-8f6d512b12ba] Running
	I0816 21:58:23.327058   71539 system_pods.go:89] "storage-provisioner" [6fdeb906-34a1-4a95-9a66-4d7ec70d33c9] Running
	I0816 21:58:23.327064   71539 system_pods.go:126] duration metric: took 201.184377ms to wait for k8s-apps to be running ...
	I0816 21:58:23.327075   71539 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 21:58:23.327114   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:58:23.337383   71539 system_svc.go:56] duration metric: took 10.303563ms WaitForService to wait for kubelet.
	I0816 21:58:23.337403   71539 kubeadm.go:547] duration metric: took 27.682467618s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
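The kubelet check above relies purely on an exit status: systemctl is-active exits 0 when at least one of the named units is active, so the extra "service" token in the logged command is harmless. A local sketch that queries the unit directly:

    // Only the exit status matters: is-active exits 0 when the unit is
    // active, and --quiet suppresses the state name on stdout.
    if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    	log.Fatalf("kubelet service is not running: %v", err)
    }
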
	I0816 21:58:23.337422   71539 node_conditions.go:102] verifying NodePressure condition ...
	I0816 21:58:23.522843   71539 request.go:600] Waited for 185.337135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0816 21:58:23.522893   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0816 21:58:23.522899   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:23.522905   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:23.522909   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:23.525086   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:23.525102   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:23.525108   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:23 GMT
	I0816 21:58:23.525116   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:23.525124   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:23.525131   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:23.525136   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:23.525233   71539 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation": [truncated 6649 chars]
	I0816 21:58:23.526166   71539 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 21:58:23.526186   71539 node_conditions.go:123] node cpu capacity is 8
	I0816 21:58:23.526198   71539 node_conditions.go:105] duration metric: took 188.772765ms to run NodePressure ...
	I0816 21:58:23.526211   71539 start.go:231] waiting for startup goroutines ...
	I0816 21:58:23.528530   71539 out.go:177] 
	I0816 21:58:23.528725   71539 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:58:23.528815   71539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json ...
	I0816 21:58:23.531028   71539 out.go:177] * Starting node multinode-20210816215712-6487-m02 in cluster multinode-20210816215712-6487
	I0816 21:58:23.531054   71539 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:58:23.532517   71539 out.go:177] * Pulling base image ...
	I0816 21:58:23.532536   71539 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:58:23.532546   71539 cache.go:56] Caching tarball of preloaded images
	I0816 21:58:23.532615   71539 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:58:23.532653   71539 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 21:58:23.532675   71539 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 21:58:23.532742   71539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json ...
	I0816 21:58:23.618075   71539 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:58:23.618111   71539 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 21:58:23.618126   71539 cache.go:205] Successfully downloaded all kic artifacts
	I0816 21:58:23.618164   71539 start.go:313] acquiring machines lock for multinode-20210816215712-6487-m02: {Name:mke3851d023c8c08d2b6f87e8fe8140a538f6d98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:58:23.618306   71539 start.go:317] acquired machines lock for "multinode-20210816215712-6487-m02" in 117.826µs
	I0816 21:58:23.618337   71539 start.go:89] Provisioning new machine with config: &{Name:multinode-20210816215712-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0816 21:58:23.618430   71539 start.go:126] createHost starting for "m02" (driver="docker")
	I0816 21:58:23.621149   71539 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 21:58:23.621255   71539 start.go:160] libmachine.API.Create for "multinode-20210816215712-6487" (driver="docker")
	I0816 21:58:23.621286   71539 client.go:168] LocalClient.Create starting
	I0816 21:58:23.621390   71539 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 21:58:23.621427   71539 main.go:130] libmachine: Decoding PEM data...
	I0816 21:58:23.621451   71539 main.go:130] libmachine: Parsing certificate...
	I0816 21:58:23.621558   71539 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 21:58:23.621580   71539 main.go:130] libmachine: Decoding PEM data...
	I0816 21:58:23.621596   71539 main.go:130] libmachine: Parsing certificate...
	I0816 21:58:23.621866   71539 cli_runner.go:115] Run: docker network inspect multinode-20210816215712-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:58:23.658139   71539 network_create.go:67] Found existing network {name:multinode-20210816215712-6487 subnet:0xc001102b10 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0816 21:58:23.658178   71539 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210816215712-6487-m02" container
	I0816 21:58:23.658228   71539 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 21:58:23.694373   71539 cli_runner.go:115] Run: docker volume create multinode-20210816215712-6487-m02 --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487-m02 --label created_by.minikube.sigs.k8s.io=true
	I0816 21:58:23.731230   71539 oci.go:102] Successfully created a docker volume multinode-20210816215712-6487-m02
	I0816 21:58:23.731296   71539 cli_runner.go:115] Run: docker run --rm --name multinode-20210816215712-6487-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487-m02 --entrypoint /usr/bin/test -v multinode-20210816215712-6487-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 21:58:24.487221   71539 oci.go:106] Successfully prepared a docker volume multinode-20210816215712-6487-m02
	W0816 21:58:24.487282   71539 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 21:58:24.487291   71539 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 21:58:24.487379   71539 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 21:58:24.487294   71539 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:58:24.487435   71539 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 21:58:24.487493   71539 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210816215712-6487-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 21:58:24.565749   71539 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210816215712-6487-m02 --name multinode-20210816215712-6487-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210816215712-6487-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210816215712-6487-m02 --network multinode-20210816215712-6487 --ip 192.168.49.3 --volume multinode-20210816215712-6487-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 21:58:25.037646   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Running}}
	I0816 21:58:25.083779   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Status}}
	I0816 21:58:25.126211   71539 cli_runner.go:115] Run: docker exec multinode-20210816215712-6487-m02 stat /var/lib/dpkg/alternatives/iptables
	I0816 21:58:25.270569   71539 oci.go:278] the created container "multinode-20210816215712-6487-m02" has a running status.
	I0816 21:58:25.270612   71539 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa...
	I0816 21:58:25.710117   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0816 21:58:25.710159   71539 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 21:58:26.086015   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Status}}
	I0816 21:58:26.124086   71539 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 21:58:26.124129   71539 kic_runner.go:115] Args: [docker exec --privileged multinode-20210816215712-6487-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 21:58:27.892614   71539 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210816215712-6487-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.405077978s)
	I0816 21:58:27.892645   71539 kic.go:188] duration metric: took 3.405209 seconds to extract preloaded images to volume
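	(The preload step that just completed uses a common Docker idiom: populate a named volume by mounting it into a short-lived container whose entrypoint does the copying. A minimal sketch of the same pattern, with placeholder names; the image must provide tar and lz4, as minikube's kicbase image does:)
	# Sketch: prime a named volume from a host tarball via a throwaway container.
	# demo-vol and $IMAGE are placeholders, not the names minikube uses.
	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir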
	I0816 21:58:27.892715   71539 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Status}}
	I0816 21:58:27.929887   71539 machine.go:88] provisioning docker machine ...
	I0816 21:58:27.929931   71539 ubuntu.go:169] provisioning hostname "multinode-20210816215712-6487-m02"
	I0816 21:58:27.930002   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:27.967373   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:58:27.967533   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0816 21:58:27.967547   71539 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210816215712-6487-m02 && echo "multinode-20210816215712-6487-m02" | sudo tee /etc/hostname
	I0816 21:58:28.103321   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210816215712-6487-m02
	
	I0816 21:58:28.103396   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:28.140386   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:58:28.140543   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0816 21:58:28.140572   71539 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210816215712-6487-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210816215712-6487-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210816215712-6487-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 21:58:28.263163   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: 
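	(The SSH dialer above targets 127.0.0.1:32812, the host port Docker mapped to the container's port 22 via --publish=127.0.0.1::22 in the docker run above. For reference, the same mapping can be recovered and used by hand; $MINIKUBE_HOME stands in for the .minikube directory shown in the paths in this log:)
	# Look up the host port mapped to the node container's sshd, then connect
	# with the key minikube generated for this node.
	docker port multinode-20210816215712-6487-m02 22/tcp   # -> 127.0.0.1:32812
	ssh -p 32812 \
	  -i "$MINIKUBE_HOME/machines/multinode-20210816215712-6487-m02/id_rsa" \
	  docker@127.0.0.1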
	I0816 21:58:28.263188   71539 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 21:58:28.263206   71539 ubuntu.go:177] setting up certificates
	I0816 21:58:28.263215   71539 provision.go:83] configureAuth start
	I0816 21:58:28.263257   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487-m02
	I0816 21:58:28.300110   71539 provision.go:138] copyHostCerts
	I0816 21:58:28.300146   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 21:58:28.300174   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 21:58:28.300187   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 21:58:28.300243   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 21:58:28.300299   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 21:58:28.300323   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 21:58:28.300330   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 21:58:28.300350   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 21:58:28.300385   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 21:58:28.300401   71539 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 21:58:28.300407   71539 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 21:58:28.300425   71539 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 21:58:28.300459   71539 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.multinode-20210816215712-6487-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210816215712-6487-m02]
	I0816 21:58:28.473855   71539 provision.go:172] copyRemoteCerts
	I0816 21:58:28.473910   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 21:58:28.473943   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:28.512225   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:58:28.612676   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 21:58:28.612732   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 21:58:28.628387   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 21:58:28.628425   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0816 21:58:28.643163   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 21:58:28.643197   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 21:58:28.657862   71539 provision.go:86] duration metric: configureAuth took 394.640655ms
	I0816 21:58:28.657879   71539 ubuntu.go:193] setting minikube options for container-runtime
	I0816 21:58:28.658009   71539 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:58:28.658121   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:28.695158   71539 main.go:130] libmachine: Using SSH client type: native
	I0816 21:58:28.695307   71539 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0816 21:58:28.695325   71539 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 21:58:29.053557   71539 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 21:58:29.053598   71539 machine.go:91] provisioned docker machine in 1.123685529s
	I0816 21:58:29.053610   71539 client.go:171] LocalClient.Create took 5.432315151s
	I0816 21:58:29.053629   71539 start.go:168] duration metric: libmachine.API.Create for "multinode-20210816215712-6487" took 5.432371412s
	I0816 21:58:29.053640   71539 start.go:267] post-start starting for "multinode-20210816215712-6487-m02" (driver="docker")
	I0816 21:58:29.053647   71539 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 21:58:29.053701   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 21:58:29.053734   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:29.090635   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:58:29.178830   71539 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 21:58:29.181341   71539 command_runner.go:124] > NAME="Ubuntu"
	I0816 21:58:29.181360   71539 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0816 21:58:29.181366   71539 command_runner.go:124] > ID=ubuntu
	I0816 21:58:29.181373   71539 command_runner.go:124] > ID_LIKE=debian
	I0816 21:58:29.181381   71539 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0816 21:58:29.181388   71539 command_runner.go:124] > VERSION_ID="20.04"
	I0816 21:58:29.181400   71539 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0816 21:58:29.181411   71539 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0816 21:58:29.181422   71539 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0816 21:58:29.181438   71539 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0816 21:58:29.181452   71539 command_runner.go:124] > VERSION_CODENAME=focal
	I0816 21:58:29.181459   71539 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0816 21:58:29.181519   71539 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 21:58:29.181539   71539 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 21:58:29.181552   71539 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 21:58:29.181565   71539 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 21:58:29.181579   71539 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 21:58:29.181630   71539 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 21:58:29.181724   71539 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 21:58:29.181736   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> /etc/ssl/certs/64872.pem
	I0816 21:58:29.181837   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 21:58:29.187707   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 21:58:29.202987   71539 start.go:270] post-start completed in 149.336063ms
	I0816 21:58:29.203275   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487-m02
	I0816 21:58:29.240806   71539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/config.json ...
	I0816 21:58:29.241027   71539 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:58:29.241077   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:29.279517   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:58:29.363545   71539 command_runner.go:124] > 29%
	I0816 21:58:29.363749   71539 start.go:129] duration metric: createHost completed in 5.745305476s
	I0816 21:58:29.363769   71539 start.go:80] releasing machines lock for "multinode-20210816215712-6487-m02", held for 5.745447382s
	I0816 21:58:29.363850   71539 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487-m02
	I0816 21:58:29.403247   71539 out.go:177] * Found network options:
	I0816 21:58:29.404801   71539 out.go:177]   - NO_PROXY=192.168.49.2
	W0816 21:58:29.404844   71539 proxy.go:118] fail to check proxy env: Error ip not in block
	W0816 21:58:29.404884   71539 proxy.go:118] fail to check proxy env: Error ip not in block
	I0816 21:58:29.404999   71539 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 21:58:29.405014   71539 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 21:58:29.405054   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:29.405060   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:58:29.444800   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:58:29.453542   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:58:29.545924   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 21:58:29.570672   71539 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0816 21:58:29.570698   71539 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0816 21:58:29.570712   71539 command_runner.go:124] > <H1>302 Moved</H1>
	I0816 21:58:29.570717   71539 command_runner.go:124] > The document has moved
	I0816 21:58:29.570723   71539 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0816 21:58:29.570729   71539 command_runner.go:124] > </BODY></HTML>
	I0816 21:58:29.571885   71539 docker.go:153] disabling docker service ...
	I0816 21:58:29.571952   71539 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 21:58:29.580664   71539 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 21:58:29.588337   71539 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 21:58:29.651016   71539 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0816 21:58:29.651079   71539 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 21:58:29.659498   71539 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0816 21:58:29.714487   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 21:58:29.722645   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 21:58:29.733120   71539 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 21:58:29.733141   71539 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
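	(With /etc/crictl.yaml in place, crictl defaults to the CRI-O socket. The same endpoint can also be supplied per invocation, which is handy when the file is absent, for example:)
	# One-off equivalents that do not rely on /etc/crictl.yaml:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a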
	I0816 21:58:29.733829   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 21:58:29.740624   71539 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 21:58:29.740649   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 21:58:29.747536   71539 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 21:58:29.752973   71539 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 21:58:29.753012   71539 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 21:58:29.753047   71539 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 21:58:29.759672   71539 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
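	(The two settings above, bridged traffic visible to iptables and IPv4 forwarding, are standard Kubernetes node prerequisites; here they are applied ad hoc after loading br_netfilter. On a normal host one would persist them via sysctl, a step not needed inside the throwaway kic container, which is reprovisioned on every start:)
	# Persistent equivalent of the ad-hoc settings above (illustrative path):
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system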
	I0816 21:58:29.765147   71539 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 21:58:29.823436   71539 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 21:58:29.831945   71539 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 21:58:29.831991   71539 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 21:58:29.834634   71539 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0816 21:58:29.834657   71539 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 21:58:29.834667   71539 command_runner.go:124] > Device: afh/175d	Inode: 387257      Links: 1
	I0816 21:58:29.834674   71539 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 21:58:29.834679   71539 command_runner.go:124] > Access: 2021-08-16 21:58:29.041007928 +0000
	I0816 21:58:29.834685   71539 command_runner.go:124] > Modify: 2021-08-16 21:58:29.041007928 +0000
	I0816 21:58:29.834693   71539 command_runner.go:124] > Change: 2021-08-16 21:58:29.041007928 +0000
	I0816 21:58:29.834697   71539 command_runner.go:124] >  Birth: -
	I0816 21:58:29.834712   71539 start.go:413] Will wait 60s for crictl version
	I0816 21:58:29.834747   71539 ssh_runner.go:149] Run: sudo crictl version
	I0816 21:58:29.866860   71539 command_runner.go:124] > Version:  0.1.0
	I0816 21:58:29.866879   71539 command_runner.go:124] > RuntimeName:  cri-o
	I0816 21:58:29.866886   71539 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0816 21:58:29.866892   71539 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0816 21:58:29.866908   71539 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 21:58:29.866963   71539 ssh_runner.go:149] Run: crio --version
	I0816 21:58:29.924453   71539 command_runner.go:124] ! time="2021-08-16T21:58:29Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:58:29.926385   71539 command_runner.go:124] > crio version 1.20.3
	I0816 21:58:29.926404   71539 command_runner.go:124] > Version:       1.20.3
	I0816 21:58:29.926411   71539 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0816 21:58:29.926417   71539 command_runner.go:124] > GitTreeState:  clean
	I0816 21:58:29.926423   71539 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0816 21:58:29.926427   71539 command_runner.go:124] > GoVersion:     go1.15.2
	I0816 21:58:29.926431   71539 command_runner.go:124] > Compiler:      gc
	I0816 21:58:29.926436   71539 command_runner.go:124] > Platform:      linux/amd64
	I0816 21:58:29.926444   71539 command_runner.go:124] > Linkmode:      dynamic
	I0816 21:58:29.926507   71539 ssh_runner.go:149] Run: crio --version
	I0816 21:58:29.983225   71539 command_runner.go:124] > crio version 1.20.3
	I0816 21:58:29.983243   71539 command_runner.go:124] > Version:       1.20.3
	I0816 21:58:29.983250   71539 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0816 21:58:29.983255   71539 command_runner.go:124] > GitTreeState:  clean
	I0816 21:58:29.983261   71539 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0816 21:58:29.983265   71539 command_runner.go:124] > GoVersion:     go1.15.2
	I0816 21:58:29.983270   71539 command_runner.go:124] > Compiler:      gc
	I0816 21:58:29.983275   71539 command_runner.go:124] > Platform:      linux/amd64
	I0816 21:58:29.983279   71539 command_runner.go:124] > Linkmode:      dynamic
	I0816 21:58:29.984300   71539 command_runner.go:124] ! time="2021-08-16T21:58:29Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:58:29.986843   71539 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 21:58:29.988198   71539 out.go:177]   - env NO_PROXY=192.168.49.2
	I0816 21:58:29.988270   71539 cli_runner.go:115] Run: docker network inspect multinode-20210816215712-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 21:58:30.023753   71539 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 21:58:30.026979   71539 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
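	(The one-liner above is minikube's idempotent hosts-file update: strip any stale host.minikube.internal entry, append the current one, and copy the result back. The same pattern works for any managed /etc/hosts entry; a generic sketch with placeholder NAME/ADDR, keeping in mind that dots in NAME are regex wildcards here:)
	NAME=host.minikube.internal
	ADDR=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$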
	I0816 21:58:30.035497   71539 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487 for IP: 192.168.49.3
	I0816 21:58:30.035539   71539 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 21:58:30.035556   71539 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 21:58:30.035584   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 21:58:30.035595   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 21:58:30.035607   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 21:58:30.035617   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 21:58:30.035669   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 21:58:30.035705   71539 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 21:58:30.035720   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 21:58:30.035742   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 21:58:30.035766   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 21:58:30.035788   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 21:58:30.035826   71539 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 21:58:30.035850   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem -> /usr/share/ca-certificates/6487.pem
	I0816 21:58:30.035864   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> /usr/share/ca-certificates/64872.pem
	I0816 21:58:30.035875   71539 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:58:30.036223   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 21:58:30.051706   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 21:58:30.066367   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 21:58:30.081000   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 21:58:30.095979   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 21:58:30.110764   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 21:58:30.125543   71539 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 21:58:30.140540   71539 ssh_runner.go:149] Run: openssl version
	I0816 21:58:30.144596   71539 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0816 21:58:30.144728   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 21:58:30.151050   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 21:58:30.153677   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 21:58:30.153726   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 21:58:30.153770   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 21:58:30.157819   71539 command_runner.go:124] > 51391683
	I0816 21:58:30.157956   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 21:58:30.164171   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 21:58:30.170490   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 21:58:30.173085   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 21:58:30.173151   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 21:58:30.173185   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 21:58:30.177374   71539 command_runner.go:124] > 3ec20f2e
	I0816 21:58:30.177424   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 21:58:30.183691   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 21:58:30.189928   71539 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:58:30.192679   71539 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:58:30.192746   71539 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:58:30.192787   71539 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 21:58:30.196817   71539 command_runner.go:124] > b5213941
	I0816 21:58:30.196997   71539 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
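	(The hash values returned above, 51391683, 3ec20f2e and b5213941, are OpenSSL subject-name hashes; OpenSSL locates a CA in a directory by opening <hash>.0, which is why each PEM gets a symlink under that name. The general recipe, using the minikube CA from this run as the example:)
	# How the <hash>.0 symlink names above are derived:
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$CERT")   # -> b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/${h}.0"
	# (openssl rehash / c_rehash automates this for a whole directory.)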
	I0816 21:58:30.203341   71539 ssh_runner.go:149] Run: crio config
	I0816 21:58:30.259710   71539 command_runner.go:124] ! time="2021-08-16T21:58:30Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0816 21:58:30.262705   71539 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0816 21:58:30.265034   71539 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0816 21:58:30.265055   71539 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 21:58:30.265062   71539 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 21:58:30.265066   71539 command_runner.go:124] > #
	I0816 21:58:30.265072   71539 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 21:58:30.265079   71539 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 21:58:30.265089   71539 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 21:58:30.265096   71539 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 21:58:30.265099   71539 command_runner.go:124] > # reload'.
	I0816 21:58:30.265109   71539 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 21:58:30.265119   71539 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 21:58:30.265128   71539 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0816 21:58:30.265138   71539 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0816 21:58:30.265148   71539 command_runner.go:124] > [crio]
	I0816 21:58:30.265157   71539 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 21:58:30.265165   71539 command_runner.go:124] > # containers images, in this directory.
	I0816 21:58:30.265173   71539 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0816 21:58:30.265183   71539 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 21:58:30.265190   71539 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0816 21:58:30.265197   71539 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0816 21:58:30.265206   71539 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 21:58:30.265213   71539 command_runner.go:124] > #storage_driver = "overlay"
	I0816 21:58:30.265220   71539 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0816 21:58:30.265230   71539 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0816 21:58:30.265236   71539 command_runner.go:124] > #storage_option = [
	I0816 21:58:30.265241   71539 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0816 21:58:30.265247   71539 command_runner.go:124] > #]
	I0816 21:58:30.265254   71539 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0816 21:58:30.265260   71539 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 21:58:30.265267   71539 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0816 21:58:30.265275   71539 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0816 21:58:30.265284   71539 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0816 21:58:30.265291   71539 command_runner.go:124] > # always happen on a node reboot
	I0816 21:58:30.265296   71539 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0816 21:58:30.265304   71539 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0816 21:58:30.265312   71539 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0816 21:58:30.265322   71539 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0816 21:58:30.265335   71539 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0816 21:58:30.265344   71539 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 21:58:30.265347   71539 command_runner.go:124] > [crio.api]
	I0816 21:58:30.265355   71539 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 21:58:30.265362   71539 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0816 21:58:30.265368   71539 command_runner.go:124] > # IP address on which the stream server will listen.
	I0816 21:58:30.265375   71539 command_runner.go:124] > stream_address = "127.0.0.1"
	I0816 21:58:30.265382   71539 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 21:58:30.265389   71539 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0816 21:58:30.265396   71539 command_runner.go:124] > stream_port = "0"
	I0816 21:58:30.265401   71539 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0816 21:58:30.265408   71539 command_runner.go:124] > stream_enable_tls = false
	I0816 21:58:30.265414   71539 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0816 21:58:30.265420   71539 command_runner.go:124] > stream_idle_timeout = ""
	I0816 21:58:30.265427   71539 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 21:58:30.265436   71539 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 21:58:30.265442   71539 command_runner.go:124] > # minutes.
	I0816 21:58:30.265446   71539 command_runner.go:124] > stream_tls_cert = ""
	I0816 21:58:30.265455   71539 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 21:58:30.265463   71539 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 21:58:30.265469   71539 command_runner.go:124] > stream_tls_key = ""
	I0816 21:58:30.265475   71539 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 21:58:30.265484   71539 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 21:58:30.265493   71539 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0816 21:58:30.265500   71539 command_runner.go:124] > stream_tls_ca = ""
	I0816 21:58:30.265516   71539 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0816 21:58:30.265522   71539 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0816 21:58:30.265530   71539 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0816 21:58:30.265537   71539 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0816 21:58:30.265543   71539 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 21:58:30.265553   71539 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0816 21:58:30.265560   71539 command_runner.go:124] > [crio.runtime]
	I0816 21:58:30.265566   71539 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0816 21:58:30.265574   71539 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 21:58:30.265580   71539 command_runner.go:124] > # "nofile=1024:2048"
	I0816 21:58:30.265587   71539 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 21:58:30.265595   71539 command_runner.go:124] > #default_ulimits = [
	I0816 21:58:30.265602   71539 command_runner.go:124] > #]
	I0816 21:58:30.265608   71539 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 21:58:30.265614   71539 command_runner.go:124] > no_pivot = false
	I0816 21:58:30.265625   71539 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0816 21:58:30.265637   71539 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0816 21:58:30.265645   71539 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0816 21:58:30.265651   71539 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 21:58:30.265659   71539 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0816 21:58:30.265663   71539 command_runner.go:124] > conmon = ""
	I0816 21:58:30.265667   71539 command_runner.go:124] > # Cgroup setting for conmon
	I0816 21:58:30.265674   71539 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0816 21:58:30.265680   71539 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0816 21:58:30.265690   71539 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0816 21:58:30.265697   71539 command_runner.go:124] > conmon_env = [
	I0816 21:58:30.265703   71539 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 21:58:30.265708   71539 command_runner.go:124] > ]
	I0816 21:58:30.265714   71539 command_runner.go:124] > # Additional environment variables to set for all the
	I0816 21:58:30.265722   71539 command_runner.go:124] > # containers. These are overridden if set in the
	I0816 21:58:30.265730   71539 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0816 21:58:30.265734   71539 command_runner.go:124] > default_env = [
	I0816 21:58:30.265740   71539 command_runner.go:124] > ]
	I0816 21:58:30.265746   71539 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0816 21:58:30.265753   71539 command_runner.go:124] > selinux = false
	I0816 21:58:30.265760   71539 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 21:58:30.265769   71539 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 21:58:30.265777   71539 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0816 21:58:30.265781   71539 command_runner.go:124] > seccomp_profile = ""
	I0816 21:58:30.265788   71539 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0816 21:58:30.265796   71539 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 21:58:30.265805   71539 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 21:58:30.265812   71539 command_runner.go:124] > # which might increase security.
	I0816 21:58:30.265816   71539 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0816 21:58:30.265826   71539 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 21:58:30.265834   71539 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 21:58:30.265843   71539 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 21:58:30.265851   71539 command_runner.go:124] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0816 21:58:30.265861   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:58:30.265869   71539 command_runner.go:124] > apparmor_profile = "crio-default"
	I0816 21:58:30.265875   71539 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0816 21:58:30.265882   71539 command_runner.go:124] > # irqbalance daemon.
	I0816 21:58:30.265887   71539 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 21:58:30.265895   71539 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0816 21:58:30.265899   71539 command_runner.go:124] > cgroup_manager = "systemd"
	I0816 21:58:30.265908   71539 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 21:58:30.265914   71539 command_runner.go:124] > separate_pull_cgroup = ""
	I0816 21:58:30.265921   71539 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 21:58:30.265930   71539 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0816 21:58:30.265937   71539 command_runner.go:124] > # will be added.
	I0816 21:58:30.265941   71539 command_runner.go:124] > default_capabilities = [
	I0816 21:58:30.265947   71539 command_runner.go:124] > 	"CHOWN",
	I0816 21:58:30.265950   71539 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0816 21:58:30.265956   71539 command_runner.go:124] > 	"FSETID",
	I0816 21:58:30.265960   71539 command_runner.go:124] > 	"FOWNER",
	I0816 21:58:30.265965   71539 command_runner.go:124] > 	"SETGID",
	I0816 21:58:30.265969   71539 command_runner.go:124] > 	"SETUID",
	I0816 21:58:30.265974   71539 command_runner.go:124] > 	"SETPCAP",
	I0816 21:58:30.265978   71539 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0816 21:58:30.265984   71539 command_runner.go:124] > 	"KILL",
	I0816 21:58:30.265987   71539 command_runner.go:124] > ]
	I0816 21:58:30.265996   71539 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 21:58:30.266004   71539 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0816 21:58:30.266011   71539 command_runner.go:124] > default_sysctls = [
	I0816 21:58:30.266014   71539 command_runner.go:124] > ]
	I0816 21:58:30.266019   71539 command_runner.go:124] > # List of additional devices, specified as
	I0816 21:58:30.266030   71539 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 21:58:30.266040   71539 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0816 21:58:30.266048   71539 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0816 21:58:30.266055   71539 command_runner.go:124] > additional_devices = [
	I0816 21:58:30.266058   71539 command_runner.go:124] > ]
	I0816 21:58:30.266064   71539 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 21:58:30.266072   71539 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 21:58:30.266076   71539 command_runner.go:124] > hooks_dir = [
	I0816 21:58:30.266081   71539 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0816 21:58:30.266086   71539 command_runner.go:124] > ]
	I0816 21:58:30.266092   71539 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0816 21:58:30.266101   71539 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 21:58:30.266108   71539 command_runner.go:124] > # its default mounts from the following two files:
	I0816 21:58:30.266111   71539 command_runner.go:124] > #
	I0816 21:58:30.266117   71539 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 21:58:30.266126   71539 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0816 21:58:30.266134   71539 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0816 21:58:30.266140   71539 command_runner.go:124] > #
	I0816 21:58:30.266146   71539 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 21:58:30.266155   71539 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 21:58:30.266165   71539 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 21:58:30.266173   71539 command_runner.go:124] > #      only add mounts it finds in this file.
	I0816 21:58:30.266178   71539 command_runner.go:124] > #
	I0816 21:58:30.266183   71539 command_runner.go:124] > #default_mounts_file = ""
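The default-mounts files described above hold one /SRC:/DST pair per line. A minimal parser sketch under that assumption (the bare-path fallback is a guess, not confirmed by this log):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/usr/share/containers/mounts.conf")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue // skip blanks and comments
			}
			src, dst, found := strings.Cut(line, ":")
			if !found {
				dst = src // assumption: a bare path mounts to the same destination
			}
			fmt.Printf("bind %s -> %s\n", src, dst)
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}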
	I0816 21:58:30.266191   71539 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0816 21:58:30.266199   71539 command_runner.go:124] > pids_limit = 1024
	I0816 21:58:30.266209   71539 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0816 21:58:30.266217   71539 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 21:58:30.266227   71539 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 21:58:30.266234   71539 command_runner.go:124] > # limit is never exceeded.
	I0816 21:58:30.266241   71539 command_runner.go:124] > log_size_max = -1
	I0816 21:58:30.266293   71539 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 21:58:30.266302   71539 command_runner.go:124] > log_to_journald = false
	I0816 21:58:30.266308   71539 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0816 21:58:30.266313   71539 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0816 21:58:30.266321   71539 command_runner.go:124] > # Path to directory for container attach sockets.
	I0816 21:58:30.266326   71539 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0816 21:58:30.266334   71539 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0816 21:58:30.266338   71539 command_runner.go:124] > bind_mount_prefix = ""
	I0816 21:58:30.266345   71539 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0816 21:58:30.266351   71539 command_runner.go:124] > read_only = false
	I0816 21:58:30.266357   71539 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 21:58:30.266366   71539 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 21:58:30.266373   71539 command_runner.go:124] > # live configuration reload.
	I0816 21:58:30.266376   71539 command_runner.go:124] > log_level = "info"
	I0816 21:58:30.266385   71539 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0816 21:58:30.266392   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:58:30.266396   71539 command_runner.go:124] > log_filter = ""
	I0816 21:58:30.266402   71539 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0816 21:58:30.266411   71539 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 21:58:30.266419   71539 command_runner.go:124] > # separated by comma.
	I0816 21:58:30.266424   71539 command_runner.go:124] > uid_mappings = ""
	I0816 21:58:30.266434   71539 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0816 21:58:30.266442   71539 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 21:58:30.266448   71539 command_runner.go:124] > # separated by comma.
	I0816 21:58:30.266452   71539 command_runner.go:124] > gid_mappings = ""
	I0816 21:58:30.266459   71539 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 21:58:30.266467   71539 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0816 21:58:30.266475   71539 command_runner.go:124] > # value is 30s; lower values are not considered by CRI-O.
	I0816 21:58:30.266479   71539 command_runner.go:124] > ctr_stop_timeout = 30
	I0816 21:58:30.266488   71539 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0816 21:58:30.266494   71539 command_runner.go:124] > # and manage their lifecycle.
	I0816 21:58:30.266501   71539 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0816 21:58:30.266511   71539 command_runner.go:124] > manage_ns_lifecycle = true
	I0816 21:58:30.266517   71539 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 21:58:30.266525   71539 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0816 21:58:30.266532   71539 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0816 21:58:30.266537   71539 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0816 21:58:30.266544   71539 command_runner.go:124] > drop_infra_ctr = false
	I0816 21:58:30.266551   71539 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 21:58:30.266559   71539 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0816 21:58:30.266567   71539 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 21:58:30.266573   71539 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0816 21:58:30.266580   71539 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0816 21:58:30.266587   71539 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0816 21:58:30.266591   71539 command_runner.go:124] > namespaces_dir = "/var/run"
	I0816 21:58:30.266603   71539 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 21:58:30.266612   71539 command_runner.go:124] > pinns_path = ""
	I0816 21:58:30.266621   71539 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 21:58:30.266630   71539 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0816 21:58:30.266638   71539 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0816 21:58:30.266645   71539 command_runner.go:124] > default_runtime = "runc"
	I0816 21:58:30.266651   71539 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 21:58:30.266660   71539 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0816 21:58:30.266669   71539 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0816 21:58:30.266678   71539 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0816 21:58:30.266684   71539 command_runner.go:124] > #
	I0816 21:58:30.266689   71539 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0816 21:58:30.266696   71539 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0816 21:58:30.266701   71539 command_runner.go:124] > #  runtime_type = "oci"
	I0816 21:58:30.266708   71539 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0816 21:58:30.266712   71539 command_runner.go:124] > #  privileged_without_host_devices = false
	I0816 21:58:30.266719   71539 command_runner.go:124] > #  allowed_annotations = []
	I0816 21:58:30.266722   71539 command_runner.go:124] > # Where:
	I0816 21:58:30.266728   71539 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0816 21:58:30.266736   71539 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0816 21:58:30.266745   71539 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 21:58:30.266752   71539 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0816 21:58:30.266758   71539 command_runner.go:124] > #   in $PATH.
	I0816 21:58:30.266764   71539 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0816 21:58:30.266773   71539 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0816 21:58:30.266783   71539 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0816 21:58:30.266789   71539 command_runner.go:124] > #   state.
	I0816 21:58:30.266795   71539 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 21:58:30.266804   71539 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0816 21:58:30.266810   71539 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 21:58:30.266821   71539 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 21:58:30.266829   71539 command_runner.go:124] > #   The currently recognized values are:
	I0816 21:58:30.266835   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 21:58:30.266844   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 21:58:30.266852   71539 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 21:58:30.266858   71539 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0816 21:58:30.266864   71539 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0816 21:58:30.266870   71539 command_runner.go:124] > runtime_type = "oci"
	I0816 21:58:30.266874   71539 command_runner.go:124] > runtime_root = "/run/runc"
	I0816 21:58:30.266883   71539 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0816 21:58:30.266891   71539 command_runner.go:124] > # running containers
	I0816 21:58:30.266899   71539 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0816 21:58:30.266905   71539 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0816 21:58:30.266914   71539 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0816 21:58:30.266923   71539 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0816 21:58:30.266930   71539 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0816 21:58:30.266935   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0816 21:58:30.266943   71539 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0816 21:58:30.266951   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0816 21:58:30.266956   71539 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0816 21:58:30.266963   71539 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
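The runc entry above follows the handler format documented in the comments. A sketch of decoding such an entry into the documented fields, assuming github.com/BurntSushi/toml as the parser; CRI-O's actual config loader is not shown in this log:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Fields mirror the documented runtime-handler keys.
	type runtimeHandler struct {
		RuntimePath                  string   `toml:"runtime_path"`
		RuntimeType                  string   `toml:"runtime_type"`
		RuntimeRoot                  string   `toml:"runtime_root"`
		PrivilegedWithoutHostDevices bool     `toml:"privileged_without_host_devices"`
		AllowedAnnotations           []string `toml:"allowed_annotations"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	const sample = `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	`

	func main() {
		var cfg crioConfig
		if _, err := toml.Decode(sample, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}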
	I0816 21:58:30.266970   71539 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 21:58:30.266975   71539 command_runner.go:124] > #
	I0816 21:58:30.266982   71539 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0816 21:58:30.266990   71539 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 21:58:30.266999   71539 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 21:58:30.267008   71539 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 21:58:30.267016   71539 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 21:58:30.267022   71539 command_runner.go:124] > [crio.image]
	I0816 21:58:30.267029   71539 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0816 21:58:30.267036   71539 command_runner.go:124] > default_transport = "docker://"
	I0816 21:58:30.267042   71539 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0816 21:58:30.267051   71539 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 21:58:30.267056   71539 command_runner.go:124] > global_auth_file = ""
	I0816 21:58:30.267063   71539 command_runner.go:124] > # The image used to instantiate infra containers.
	I0816 21:58:30.267068   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:58:30.267073   71539 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0816 21:58:30.267082   71539 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 21:58:30.267090   71539 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 21:58:30.267097   71539 command_runner.go:124] > # This option supports live configuration reload.
	I0816 21:58:30.267101   71539 command_runner.go:124] > pause_image_auth_file = ""
	I0816 21:58:30.267110   71539 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0816 21:58:30.267120   71539 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0816 21:58:30.267130   71539 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0816 21:58:30.267139   71539 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0816 21:58:30.267145   71539 command_runner.go:124] > pause_command = "/pause"
	I0816 21:58:30.267151   71539 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0816 21:58:30.267160   71539 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 21:58:30.267168   71539 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0816 21:58:30.267178   71539 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 21:58:30.267186   71539 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0816 21:58:30.267192   71539 command_runner.go:124] > signature_policy = ""
	I0816 21:58:30.267199   71539 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0816 21:58:30.267207   71539 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 21:58:30.267211   71539 command_runner.go:124] > # changing them here.
	I0816 21:58:30.267215   71539 command_runner.go:124] > #insecure_registries = "[]"
	I0816 21:58:30.267225   71539 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 21:58:30.267232   71539 command_runner.go:124] > # ignore; the last ignores volumes entirely.
	I0816 21:58:30.267237   71539 command_runner.go:124] > image_volumes = "mkdir"
	I0816 21:58:30.267245   71539 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0816 21:58:30.267254   71539 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0816 21:58:30.267263   71539 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0816 21:58:30.267270   71539 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0816 21:58:30.267279   71539 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0816 21:58:30.267286   71539 command_runner.go:124] > #registries = [
	I0816 21:58:30.267289   71539 command_runner.go:124] > # ]
	I0816 21:58:30.267306   71539 command_runner.go:124] > # Temporary directory to use for storing big files
	I0816 21:58:30.267313   71539 command_runner.go:124] > big_files_temporary_dir = ""
	I0816 21:58:30.267320   71539 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0816 21:58:30.267326   71539 command_runner.go:124] > # CNI plugins.
	I0816 21:58:30.267329   71539 command_runner.go:124] > [crio.network]
	I0816 21:58:30.267338   71539 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0816 21:58:30.267346   71539 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0816 21:58:30.267353   71539 command_runner.go:124] > # cni_default_network = "kindnet"
	I0816 21:58:30.267358   71539 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0816 21:58:30.267365   71539 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0816 21:58:30.267371   71539 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0816 21:58:30.267376   71539 command_runner.go:124] > plugin_dirs = [
	I0816 21:58:30.267381   71539 command_runner.go:124] > 	"/opt/cni/bin/",
	I0816 21:58:30.267387   71539 command_runner.go:124] > ]
	I0816 21:58:30.267393   71539 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 21:58:30.267399   71539 command_runner.go:124] > [crio.metrics]
	I0816 21:58:30.267404   71539 command_runner.go:124] > # Globally enable or disable metrics support.
	I0816 21:58:30.267411   71539 command_runner.go:124] > enable_metrics = false
	I0816 21:58:30.267416   71539 command_runner.go:124] > # The port on which the metrics server will listen.
	I0816 21:58:30.267422   71539 command_runner.go:124] > metrics_port = 9090
	I0816 21:58:30.267442   71539 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0816 21:58:30.267448   71539 command_runner.go:124] > metrics_socket = ""
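Metrics are disabled in this run (enable_metrics = false), but with them enabled the server would listen on metrics_port 9090. A minimal scrape sketch, assuming the standard Prometheus /metrics path on localhost:

	package main

	import (
		"io"
		"log"
		"net/http"
		"os"
	)

	func main() {
		// Assumed URL built from metrics_port above; only valid if
		// enable_metrics is turned on.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		io.Copy(os.Stdout, resp.Body) // dump the raw Prometheus exposition text
	}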
	I0816 21:58:30.267513   71539 cni.go:93] Creating CNI manager for ""
	I0816 21:58:30.267524   71539 cni.go:154] 2 nodes found, recommending kindnet
	I0816 21:58:30.267534   71539 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 21:58:30.267550   71539 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210816215712-6487 NodeName:multinode-20210816215712-6487-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 21:58:30.267657   71539 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210816215712-6487-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
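	
The generated kubeadm config above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal sketch that splits such a file and reports each document's kind, assuming sigs.k8s.io/yaml (which maps YAML through JSON tags); this is not minikube's own code:

	package main

	import (
		"fmt"
		"log"
		"strings"

		"sigs.k8s.io/yaml"
	)

	const config = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	`

	func main() {
		for _, doc := range strings.Split(config, "\n---\n") {
			var meta struct {
				APIVersion string `json:"apiVersion"`
				Kind       string `json:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				log.Fatal(err)
			}
			fmt.Println(meta.Kind, meta.APIVersion)
		}
	}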
	
	I0816 21:58:30.267722   71539 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210816215712-6487-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 21:58:30.267765   71539 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 21:58:30.273596   71539 command_runner.go:124] > kubeadm
	I0816 21:58:30.273615   71539 command_runner.go:124] > kubectl
	I0816 21:58:30.273626   71539 command_runner.go:124] > kubelet
	I0816 21:58:30.274120   71539 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 21:58:30.274177   71539 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0816 21:58:30.280188   71539 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (564 bytes)
	I0816 21:58:30.291193   71539 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 21:58:30.302125   71539 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 21:58:30.304572   71539 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 21:58:30.312530   71539 host.go:66] Checking if "multinode-20210816215712-6487" exists ...
	I0816 21:58:30.312729   71539 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:58:30.312737   71539 start.go:241] JoinCluster: &{Name:multinode-20210816215712-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210816215712-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0816 21:58:30.312805   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0816 21:58:30.312840   71539 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:58:30.351447   71539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:58:30.496564   71539 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token o5fmml.0gfutrj5zjalt0qg --discovery-token-ca-cert-hash sha256:ab46675face1967228b7500eeaa65be645c3bcc8b24635f14c9becbff4d6cff0 
	I0816 21:58:30.496631   71539 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0816 21:58:30.496686   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token o5fmml.0gfutrj5zjalt0qg --discovery-token-ca-cert-hash sha256:ab46675face1967228b7500eeaa65be645c3bcc8b24635f14c9becbff4d6cff0 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210816215712-6487-m02"
	I0816 21:58:30.542345   71539 command_runner.go:124] > [preflight] Running pre-flight checks
	I0816 21:58:30.561657   71539 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0816 21:58:30.561681   71539 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0816 21:58:30.561690   71539 command_runner.go:124] > OS: Linux
	I0816 21:58:30.561698   71539 command_runner.go:124] > CGROUPS_CPU: enabled
	I0816 21:58:30.561707   71539 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0816 21:58:30.561715   71539 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0816 21:58:30.561729   71539 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0816 21:58:30.561736   71539 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0816 21:58:30.561744   71539 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0816 21:58:30.561763   71539 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0816 21:58:30.561774   71539 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0816 21:58:30.648538   71539 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0816 21:58:30.648570   71539 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0816 21:58:30.671147   71539 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0816 21:58:30.672074   71539 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 21:58:30.672193   71539 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 21:58:30.672214   71539 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0816 21:58:30.727387   71539 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0816 21:58:36.748033   71539 command_runner.go:124] > This node has joined the cluster:
	I0816 21:58:36.748057   71539 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0816 21:58:36.748064   71539 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0816 21:58:36.748077   71539 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0816 21:58:36.750102   71539 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0816 21:58:36.750129   71539 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0816 21:58:36.750156   71539 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0816 21:58:36.750167   71539 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 21:58:36.750188   71539 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token o5fmml.0gfutrj5zjalt0qg --discovery-token-ca-cert-hash sha256:ab46675face1967228b7500eeaa65be645c3bcc8b24635f14c9becbff4d6cff0 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210816215712-6487-m02": (6.253485804s)
	I0816 21:58:36.750228   71539 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 21:58:36.872115   71539 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0816 21:58:36.872180   71539 start.go:243] JoinCluster complete in 6.559441019s
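The join above is a two-step flow: kubeadm token create --print-join-command runs against the control plane, and the printed command is then executed on the worker with extra flags appended. A hedged sketch of that flow with os/exec; it assumes kubeadm on $PATH and is not minikube's ssh_runner code, which runs these commands over SSH:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1 (control plane): ask kubeadm for a join command.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Step 2 (worker): append the flags seen in the log and run it there.
		joinCmd := strings.TrimSpace(string(out)) +
			" --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock"
		fmt.Println("would run on the worker:", joinCmd)
	}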
	I0816 21:58:36.872197   71539 cni.go:93] Creating CNI manager for ""
	I0816 21:58:36.872207   71539 cni.go:154] 2 nodes found, recommending kindnet
	I0816 21:58:36.872256   71539 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 21:58:36.875252   71539 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0816 21:58:36.875279   71539 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0816 21:58:36.875287   71539 command_runner.go:124] > Device: 801h/2049d	Inode: 14944926    Links: 1
	I0816 21:58:36.875294   71539 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 21:58:36.875300   71539 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0816 21:58:36.875305   71539 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0816 21:58:36.875314   71539 command_runner.go:124] > Change: 2021-08-10 20:42:17.279076582 +0000
	I0816 21:58:36.875318   71539 command_runner.go:124] >  Birth: -
	I0816 21:58:36.875356   71539 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 21:58:36.875366   71539 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 21:58:36.886508   71539 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 21:58:37.044487   71539 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0816 21:58:37.046171   71539 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0816 21:58:37.047918   71539 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0816 21:58:37.056208   71539 command_runner.go:124] > daemonset.apps/kindnet configured
	I0816 21:58:37.059310   71539 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0816 21:58:37.061633   71539 out.go:177] * Verifying Kubernetes components...
	I0816 21:58:37.061686   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:58:37.071103   71539 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:58:37.071421   71539 kapi.go:59] client config for multinode-20210816215712-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/multinode-20210816215712-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 21:58:37.072815   71539 node_ready.go:35] waiting up to 6m0s for node "multinode-20210816215712-6487-m02" to be "Ready" ...
	I0816 21:58:37.072884   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:37.072893   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:37.072899   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:37.072903   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:37.074439   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:37.074461   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:37.074466   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:37.074469   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:37.074472   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:37.074476   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:37.074479   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:37 GMT
	I0816 21:58:37.074564   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:37.575337   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:37.575363   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:37.575370   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:37.575375   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:37.577107   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:37.577123   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:37.577128   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:37.577131   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:37.577137   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:37.577141   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:37.577144   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:37 GMT
	I0816 21:58:37.577230   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:38.075865   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:38.075886   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:38.075892   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:38.075896   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:38.077648   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:38.077671   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:38.077677   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:38.077680   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:38.077684   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:38.077689   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:38 GMT
	I0816 21:58:38.077691   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:38.077812   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:38.575308   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:38.575330   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:38.575335   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:38.575339   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:38.577115   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:38.577135   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:38.577141   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:38.577146   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:38.577152   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:38.577157   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:38.577161   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:38 GMT
	I0816 21:58:38.577245   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:39.075497   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:39.075526   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:39.075534   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:39.075540   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:39.077156   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:39.077172   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:39.077177   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:39 GMT
	I0816 21:58:39.077180   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:39.077183   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:39.077188   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:39.077193   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:39.077314   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:39.077559   71539 node_ready.go:58] node "multinode-20210816215712-6487-m02" has status "Ready":"False"
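The GET loop above re-fetches the node object until its Ready condition flips to True. An equivalent polling sketch with client-go; the kubeconfig path is a placeholder and this is not minikube's node_ready implementation, just the same idea:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(),
				"multinode-20210816215712-6487-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("ready:", err == nil)
	}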
	I0816 21:58:39.575937   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:39.575960   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:39.575969   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:39.575973   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:39.577360   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:39.577375   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:39.577380   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:39.577384   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:39.577387   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:39.577391   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:39 GMT
	I0816 21:58:39.577394   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:39.577470   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"561","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:an [truncated 5460 chars]
	I0816 21:58:40.075002   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:40.075027   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:40.075032   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:40.075037   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:40.077013   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:40.077033   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:40.077043   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:40.077048   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:40.077051   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:40.077054   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:40.077058   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:40 GMT
	I0816 21:58:40.077135   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:40.575759   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:40.575781   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:40.575787   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:40.575791   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:40.577371   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:40.577401   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:40.577407   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:40 GMT
	I0816 21:58:40.577411   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:40.577414   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:40.577417   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:40.577420   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:40.577532   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:41.075158   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:41.075183   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:41.075189   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:41.075193   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:41.077379   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:41.077395   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:41.077400   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:41.077404   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:41.077407   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:41.077410   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:41.077413   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:41 GMT
	I0816 21:58:41.077518   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:41.077743   71539 node_ready.go:58] node "multinode-20210816215712-6487-m02" has status "Ready":"False"
	I0816 21:58:41.575105   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:41.575127   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:41.575133   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:41.575137   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:41.576722   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:41.576742   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:41.576749   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:41.576754   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:41.576759   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:41.576763   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:41 GMT
	I0816 21:58:41.576768   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:41.576857   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:42.075146   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:42.075172   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:42.075178   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:42.075188   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:42.077350   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:42.077371   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:42.077378   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:42.077382   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:42.077386   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:42.077390   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:42.077393   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:42 GMT
	I0816 21:58:42.077485   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:42.575477   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:42.575497   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:42.575502   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:42.575506   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:42.577318   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:42.577335   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:42.577342   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:42.577347   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:42.577352   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:42.577357   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:42.577363   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:42 GMT
	I0816 21:58:42.577480   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:43.074951   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:43.074974   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:43.074981   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:43.074987   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:43.077394   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:43.077416   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:43.077422   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:43.077426   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:43.077429   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:43.077434   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:43.077439   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:43 GMT
	I0816 21:58:43.077545   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:43.077824   71539 node_ready.go:58] node "multinode-20210816215712-6487-m02" has status "Ready":"False"
	I0816 21:58:43.575022   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:43.575042   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:43.575050   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:43.575054   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:43.576776   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:43.576795   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:43.576800   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:43.576803   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:43.576806   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:43.576809   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:43.576812   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:43 GMT
	I0816 21:58:43.576888   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:44.075439   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:44.075463   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:44.075468   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:44.075473   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:44.077690   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:44.077720   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:44.077727   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:44.077732   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:44 GMT
	I0816 21:58:44.077737   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:44.077742   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:44.077750   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:44.077845   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:44.575368   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:44.575396   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:44.575403   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:44.575409   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:44.577085   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:44.577106   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:44.577113   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:44 GMT
	I0816 21:58:44.577116   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:44.577120   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:44.577123   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:44.577126   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:44.577221   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:45.075632   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:45.075658   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:45.075668   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:45.075674   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:45.077958   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:45.077978   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:45.077984   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:45.077989   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:45.077994   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:45.077999   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:45.078004   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:45 GMT
	I0816 21:58:45.078093   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:45.078324   71539 node_ready.go:58] node "multinode-20210816215712-6487-m02" has status "Ready":"False"
	I0816 21:58:45.575831   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:45.575854   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:45.575859   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:45.575863   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:45.577457   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:45.577472   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:45.577477   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:45.577481   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:45.577484   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:45.577487   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:45.577490   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:45 GMT
	I0816 21:58:45.577573   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:46.075138   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:46.075163   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.075169   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.075172   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.077331   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:46.077352   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.077359   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.077365   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.077370   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.077374   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.077377   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.077464   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"579","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{ [truncated 5569 chars]
	I0816 21:58:46.576048   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:46.576073   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.576081   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.576090   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.578077   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.578098   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.578104   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.578108   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.578111   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.578114   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.578117   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.578224   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"586","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata
":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f: [truncated 5823 chars]
	I0816 21:58:46.578453   71539 node_ready.go:49] node "multinode-20210816215712-6487-m02" has status "Ready":"True"
	I0816 21:58:46.578471   71539 node_ready.go:38] duration metric: took 9.505634364s waiting for node "multinode-20210816215712-6487-m02" to be "Ready" ...
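
The half-second cadence of the GETs above is a plain condition poll: fetch the node, read its NodeReady condition, repeat until it reports "True". A minimal sketch of that loop with client-go, assuming an already-configured *kubernetes.Clientset; the helper name, the 500ms interval, and the package name are illustrative, not minikube's exact node_ready.go:

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node every 500ms until its NodeReady
    // condition is True or the timeout elapses, logging each observation
    // much like the node_ready.go lines above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Polling a GET like this, rather than opening a watch, keeps the check trivially restartable; the cost is the repetitive request/response logging seen above.
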
	I0816 21:58:46.578481   71539 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 21:58:46.578542   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0816 21:58:46.578553   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.578559   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.578568   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.582371   71539 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0816 21:58:46.582390   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.582397   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.582402   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.582406   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.582411   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.582415   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.583275   71539 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"519","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 68300 chars]
	I0816 21:58:46.586309   71539 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.586386   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-h25nx
	I0816 21:58:46.586400   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.586407   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.586414   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.588550   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:46.588564   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.588571   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.588575   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.588579   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.588584   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.588588   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.588803   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-h25nx","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"91c50f30-030e-467f-a926-607b16ac148d","resourceVersion":"519","creationTimestamp":"2021-08-16T21:57:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ca6e691-474b-48ca-b1cf-aeabc5116474","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ca6e691-474b-48ca-b1cf-aeabc5116474\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0816 21:58:46.589191   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.589210   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.589217   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.589222   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.590980   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.590995   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.590999   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.591002   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.591005   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.591008   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.591011   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.591084   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:46.591285   71539 pod_ready.go:92] pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:46.591296   71539 pod_ready.go:81] duration metric: took 4.963389ms waiting for pod "coredns-558bd4d5db-h25nx" in "kube-system" namespace to be "Ready" ...
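
Each pod_ready check above reduces to reading the pod's PodReady condition (the node GET that follows each pod fetch appears to re-verify the pod's node as part of the same bookkeeping). A sketch of the condition check, with the same clientset assumption and a hypothetical helper name:

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is True,
    // mirroring the `has status "Ready":"True"` lines above.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
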
	I0816 21:58:46.591303   71539 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.591339   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210816215712-6487
	I0816 21:58:46.591347   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.591351   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.591355   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.593114   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.593129   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.593133   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.593137   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.593140   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.593142   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.593145   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.593218   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210816215712-6487","namespace":"kube-system","uid":"6ddb85fd-8e82-415f-bdff-c0ae7b4bf5cd","resourceVersion":"381","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8bb80161fdca904f4e120a48ecc38525","kubernetes.io/config.mirror":"8bb80161fdca904f4e120a48ecc38525","kubernetes.io/config.seen":"2021-08-16T21:57:42.759757686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kuber
netes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{ [truncated 5554 chars]
	I0816 21:58:46.593442   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.593453   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.593458   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.593461   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.594748   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.594764   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.594770   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.594774   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.594779   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.594787   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.594792   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.594867   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:46.595062   71539 pod_ready.go:92] pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:46.595077   71539 pod_ready.go:81] duration metric: took 3.768268ms waiting for pod "etcd-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.595089   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.595126   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210816215712-6487
	I0816 21:58:46.595133   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.595137   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.595143   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.596523   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.596540   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.596545   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.596549   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.596551   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.596554   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.596559   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.596633   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210816215712-6487","namespace":"kube-system","uid":"38249ebf-4ebe-4baa-ba19-dcf8adfa19dc","resourceVersion":"324","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"ea00c4e672ad786e7f4086914a3c8804","kubernetes.io/config.mirror":"ea00c4e672ad786e7f4086914a3c8804","kubernetes.io/config.seen":"2021-08-16T21:57:42.759771660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.en [truncated 8085 chars]
	I0816 21:58:46.596890   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.596903   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.596907   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.596912   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.598273   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.598285   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.598289   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.598292   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.598295   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.598299   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.598303   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.598371   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:46.598556   71539 pod_ready.go:92] pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:46.598573   71539 pod_ready.go:81] duration metric: took 3.471164ms waiting for pod "kube-apiserver-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.598582   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.598620   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210816215712-6487
	I0816 21:58:46.598628   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.598632   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.598638   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.599972   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.599986   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.599992   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.599997   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.600001   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.600006   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.600016   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.600131   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210816215712-6487","namespace":"kube-system","uid":"c43c4499-421c-44d1-bde9-7711292c7ab6","resourceVersion":"382","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3fa1b844ada1f70b8ddc6c136b566f22","kubernetes.io/config.mirror":"3fa1b844ada1f70b8ddc6c136b566f22","kubernetes.io/config.seen":"2021-08-16T21:57:42.759773088Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.
mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.sou [truncated 7651 chars]
	I0816 21:58:46.600370   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.600381   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.600385   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.600389   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.601621   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.601636   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.601641   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.601647   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.601652   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.601660   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.601665   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.601792   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:46.602021   71539 pod_ready.go:92] pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:46.602032   71539 pod_ready.go:81] duration metric: took 3.442443ms waiting for pod "kube-controller-manager-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.602040   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22rzz" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.776689   71539 request.go:600] Waited for 174.596855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22rzz
	I0816 21:58:46.776744   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22rzz
	I0816 21:58:46.776751   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.776760   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.776765   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.778507   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.778523   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.778528   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.778532   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.778535   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.778539   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.778543   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.778635   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-22rzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"09fc57f9-2322-4194-a28f-9f43e4cfd094","resourceVersion":"482","creationTimestamp":"2021-08-16T21:57:54Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fd7b7440-430e-48b2-bb5a-4544d8034ddd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd7b7440-430e-48b2-bb5a-4544d8034ddd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5752 chars]
	I0816 21:58:46.976323   71539 request.go:600] Waited for 197.343642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.976381   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:46.976387   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:46.976392   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:46.976396   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:46.978358   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:46.978379   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:46.978386   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:46.978391   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:46.978396   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:46.978401   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:46 GMT
	I0816 21:58:46.978405   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:46.978525   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:46.978767   71539 pod_ready.go:92] pod "kube-proxy-22rzz" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:46.978779   71539 pod_ready.go:81] duration metric: took 376.733553ms waiting for pod "kube-proxy-22rzz" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:46.978790   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdhwb" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:47.177087   71539 request.go:600] Waited for 198.227853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qdhwb
	I0816 21:58:47.177159   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qdhwb
	I0816 21:58:47.177172   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:47.177180   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:47.177191   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:47.179052   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:47.179090   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:47.179098   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:47.179104   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:47.179109   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:47.179115   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:47.179120   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:47 GMT
	I0816 21:58:47.179248   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qdhwb","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8806e67-cb1e-4cb6-a943-43ab54a195ae","resourceVersion":"573","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fd7b7440-430e-48b2-bb5a-4544d8034ddd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd7b7440-430e-48b2-bb5a-4544d8034ddd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5760 chars]
	I0816 21:58:47.376526   71539 request.go:600] Waited for 196.809399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:47.376603   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487-m02
	I0816 21:58:47.376616   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:47.376624   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:47.376635   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:47.378740   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:47.378761   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:47.378767   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:47.378772   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:47.378776   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:47.378781   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:47.378786   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:47 GMT
	I0816 21:58:47.378874   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487-m02","uid":"2c5a3c1d-f21c-45c5-8db7-e4c610549fd8","resourceVersion":"587","creationTimestamp":"2021-08-16T21:58:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:58:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata
":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f: [truncated 5762 chars]
	I0816 21:58:47.379110   71539 pod_ready.go:92] pod "kube-proxy-qdhwb" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:47.379125   71539 pod_ready.go:81] duration metric: took 400.329049ms waiting for pod "kube-proxy-qdhwb" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:47.379134   71539 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:47.576529   71539 request.go:600] Waited for 197.343853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210816215712-6487
	I0816 21:58:47.576610   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210816215712-6487
	I0816 21:58:47.576623   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:47.576634   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:47.576644   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:47.578470   71539 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0816 21:58:47.578491   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:47.578497   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:47.578502   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:47.578507   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:47.578511   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:47 GMT
	I0816 21:58:47.578516   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:47.578676   71539 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210816215712-6487","namespace":"kube-system","uid":"401c67bf-102e-471d-853c-8f6d512b12ba","resourceVersion":"362","creationTimestamp":"2021-08-16T21:57:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"955eb76105b940acda068885f974ae80","kubernetes.io/config.mirror":"955eb76105b940acda068885f974ae80","kubernetes.io/config.seen":"2021-08-16T21:57:42.759774136Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kube
rnetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels [truncated 4533 chars]
	I0816 21:58:47.776342   71539 request.go:600] Waited for 197.358154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:47.776429   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210816215712-6487
	I0816 21:58:47.776445   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:47.776458   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:47.776469   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:47.778552   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:47.778569   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:47.778575   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:47.778580   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:47.778584   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:47.778588   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:47.778592   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:47 GMT
	I0816 21:58:47.778693   71539 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"k
ubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-16T21:57 [truncated 6596 chars]
	I0816 21:58:47.778944   71539 pod_ready.go:92] pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 21:58:47.778958   71539 pod_ready.go:81] duration metric: took 399.816798ms waiting for pod "kube-scheduler-multinode-20210816215712-6487" in "kube-system" namespace to be "Ready" ...
	I0816 21:58:47.778971   71539 pod_ready.go:38] duration metric: took 1.200473278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
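	The pod_ready.go waits above poll each system pod until its Ready condition reports True. A minimal sketch of that polling pattern with client-go (the function name and intervals are illustrative, not minikube's actual helper):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True, the same
    // pattern as the pod_ready.go waits in the log above (sketch only).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling until timeout
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }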
	I0816 21:58:47.778993   71539 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 21:58:47.779040   71539 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:58:47.788358   71539 system_svc.go:56] duration metric: took 9.360714ms WaitForService to wait for kubelet.
	I0816 21:58:47.788375   71539 kubeadm.go:547] duration metric: took 10.72902134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 21:58:47.788398   71539 node_conditions.go:102] verifying NodePressure condition ...
	I0816 21:58:47.976833   71539 request.go:600] Waited for 188.35964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0816 21:58:47.976884   71539 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0816 21:58:47.976892   71539 round_trippers.go:438] Request Headers:
	I0816 21:58:47.976899   71539 round_trippers.go:442]     Accept: application/json, */*
	I0816 21:58:47.976908   71539 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 21:58:47.979201   71539 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0816 21:58:47.979224   71539 round_trippers.go:460] Response Headers:
	I0816 21:58:47.979232   71539 round_trippers.go:463]     Cache-Control: no-cache, private
	I0816 21:58:47.979235   71539 round_trippers.go:463]     Content-Type: application/json
	I0816 21:58:47.979239   71539 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: aeaf0eb7-7649-4219-8772-0e4935d4673d
	I0816 21:58:47.979243   71539 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 109df20d-57fa-436a-9793-3a91aeb96cd8
	I0816 21:58:47.979248   71539 round_trippers.go:463]     Date: Mon, 16 Aug 2021 21:58:47 GMT
	I0816 21:58:47.979374   71539 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"589"},"items":[{"metadata":{"name":"multinode-20210816215712-6487","uid":"f76eb40b-c41f-4216-875a-dbe59d92d1d7","resourceVersion":"402","creationTimestamp":"2021-08-16T21:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210816215712-6487","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48","minikube.k8s.io/name":"multinode-20210816215712-6487","minikube.k8s.io/updated_at":"2021_08_16T21_57_38_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-at
tach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation": [truncated 13403 chars]
	I0816 21:58:47.979874   71539 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 21:58:47.979893   71539 node_conditions.go:123] node cpu capacity is 8
	I0816 21:58:47.979933   71539 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 21:58:47.979939   71539 node_conditions.go:123] node cpu capacity is 8
	I0816 21:58:47.979945   71539 node_conditions.go:105] duration metric: took 191.542292ms to run NodePressure ...
	I0816 21:58:47.979960   71539 start.go:231] waiting for startup goroutines ...
	I0816 21:58:48.021143   71539 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 21:58:48.023377   71539 out.go:177] * Done! kubectl is now configured to use "multinode-20210816215712-6487" cluster and "default" namespace by default
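	The repeated request.go:600 "Waited for ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, and start.go:462 flags kubectl 1.20.5 against cluster 1.21.3, one minor version of skew, which kubectl supports. A minimal sketch of raising the limiter's QPS and Burst on a rest.Config (the values and kubeconfig path are arbitrary examples, not minikube's settings):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Higher QPS/Burst means fewer client-side throttling waits, at the
        // cost of more concurrent load on the apiserver.
        config.QPS = 50
        config.Burst = 100
        if _, err := kubernetes.NewForConfig(config); err != nil {
            panic(err)
        }
    }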
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 21:57:14 UTC, end at Mon 2021-08-16 21:59:15 UTC. --
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.293274170Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61 k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:42585056,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da7c1525-6004-4964-9d51-6c35b51666fa name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.294111889Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-h25nx/coredns" id=d53a4425-4ee9-400e-8c0c-0a5aa6c442a2 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.305842625Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/70c1f5f29a571b7fa846c4bb606365ebdfef92ff7efdd51654cece5453c15544/merged/etc/passwd: no such file or directory"
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.305876775Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/70c1f5f29a571b7fa846c4bb606365ebdfef92ff7efdd51654cece5453c15544/merged/etc/group: no such file or directory"
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.423116996Z" level=info msg="Created container a68efba1bebbe47074c37c87b0d7b7383552f8011cac73e53c41d7a0c760640d: kube-system/coredns-558bd4d5db-h25nx/coredns" id=d53a4425-4ee9-400e-8c0c-0a5aa6c442a2 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.423643888Z" level=info msg="Starting container: a68efba1bebbe47074c37c87b0d7b7383552f8011cac73e53c41d7a0c760640d" id=f0510d39-c95a-4cc9-86f7-c25f6e06bb99 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 21:58:21 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:21.433054088Z" level=info msg="Started container a68efba1bebbe47074c37c87b0d7b7383552f8011cac73e53c41d7a0c760640d: kube-system/coredns-558bd4d5db-h25nx/coredns" id=f0510d39-c95a-4cc9-86f7-c25f6e06bb99 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.247575630Z" level=info msg="Running pod sandbox: default/busybox-84b6686758-v4kzv/POD" id=0eee3602-1650-43b0-aa5a-1f501238fad4 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.260832672Z" level=info msg="Got pod network &{Name:busybox-84b6686758-v4kzv Namespace:default ID:ba58267427791bde059c4b324fdda1244f69dd5af4cbbd55b87ec4b52e62b97f NetNS:/var/run/netns/f7f78cb9-6e5c-459e-9881-1bb94b0e2d0e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.260859705Z" level=info msg="About to add CNI network kindnet (type=ptp)"
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.353283581Z" level=info msg="Got pod network &{Name:busybox-84b6686758-v4kzv Namespace:default ID:ba58267427791bde059c4b324fdda1244f69dd5af4cbbd55b87ec4b52e62b97f NetNS:/var/run/netns/f7f78cb9-6e5c-459e-9881-1bb94b0e2d0e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.353444950Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.473626867Z" level=info msg="Ran pod sandbox ba58267427791bde059c4b324fdda1244f69dd5af4cbbd55b87ec4b52e62b97f with infra container: default/busybox-84b6686758-v4kzv/POD" id=0eee3602-1650-43b0-aa5a-1f501238fad4 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.474629248Z" level=info msg="Checking image status: busybox:1.28" id=de7976bc-1903-4603-9653-842d60b19180 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.475059595Z" level=info msg="Image busybox:1.28 not found" id=de7976bc-1903-4603-9653-842d60b19180 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.475626476Z" level=info msg="Pulling image: busybox:1.28" id=8067cdd5-1b62-4560-9795-07356e896644 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.482401671Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 16 21:58:49 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:49.639287991Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.149973881Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47" id=8067cdd5-1b62-4560-9795-07356e896644 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.150694309Z" level=info msg="Checking image status: busybox:1.28" id=e2cb2c92-d3c8-4aa2-b8f4-334e7a771ac4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.151318170Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[docker.io/library/busybox:1.28],RepoDigests:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335],Size_:1365634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e2cb2c92-d3c8-4aa2-b8f4-334e7a771ac4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.152063921Z" level=info msg="Creating container: default/busybox-84b6686758-v4kzv/busybox" id=040c21b6-aedd-40e2-9e08-f46f2ef157d5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.303510916Z" level=info msg="Created container ddcd382617d0ce0518f8305ed543e187e5a1981ad34e5a4bf0bf02551fee52b6: default/busybox-84b6686758-v4kzv/busybox" id=040c21b6-aedd-40e2-9e08-f46f2ef157d5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.303998163Z" level=info msg="Starting container: ddcd382617d0ce0518f8305ed543e187e5a1981ad34e5a4bf0bf02551fee52b6" id=a3363ae8-9124-45a8-b83a-c59edea527c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 21:58:50 multinode-20210816215712-6487 crio[365]: time="2021-08-16 21:58:50.313317828Z" level=info msg="Started container ddcd382617d0ce0518f8305ed543e187e5a1981ad34e5a4bf0bf02551fee52b6: default/busybox-84b6686758-v4kzv/busybox" id=a3363ae8-9124-45a8-b83a-c59edea527c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
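	The CRI-O excerpt above shows the standard CRI image flow for busybox-84b6686758-v4kzv: ImageStatus reports busybox:1.28 missing, PullImage fetches it from docker.io, and CreateContainer/StartContainer run it. A rough sketch of the same v1alpha2 ImageService calls over CRI-O's socket (error handling trimmed; the unix:// dial target is an assumption about the gRPC setup):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        // Socket path taken from the cri-socket annotation earlier in this log.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        img := runtimeapi.NewImageServiceClient(conn)
        ctx := context.Background()
        spec := &runtimeapi.ImageSpec{Image: "busybox:1.28"}

        // ImageStatus first; a nil Image in the response means "not found",
        // which is what triggers the PullImage call seen above.
        status, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
        if err != nil {
            panic(err)
        }
        if status.Image == nil {
            if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
                panic(err)
            }
        }
        fmt.Println("image present")
    }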
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	ddcd382617d0c       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   25 seconds ago       Running             busybox                   0                   ba58267427791
	a68efba1bebbe       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    54 seconds ago       Running             coredns                   0                   727422f5d559e
	b8b8354bc6807       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   d4442bffe616d
	5aa39ddd11817       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   236aa747a8825
	4a907c47a2481       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   512867cd286a4
	6a802e2c04c2e       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    About a minute ago   Running             kube-scheduler            0                   fbe8bfc19c48f
	b53c70f43763d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    About a minute ago   Running             kube-controller-manager   0                   085584f199d45
	a45921d62d6d1       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    About a minute ago   Running             etcd                      0                   180860f6c3ae8
	2028867588d06       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    About a minute ago   Running             kube-apiserver            0                   61af3c68a0a08
	
	* 
	* ==> coredns [a68efba1bebbe47074c37c87b0d7b7383552f8011cac73e53c41d7a0c760640d] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210816215712-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210816215712-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=multinode-20210816215712-6487
	                    minikube.k8s.io/updated_at=2021_08_16T21_57_38_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 21:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210816215712-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 21:59:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 21:59:14 +0000   Mon, 16 Aug 2021 21:57:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 21:59:14 +0000   Mon, 16 Aug 2021 21:57:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 21:59:14 +0000   Mon, 16 Aug 2021 21:57:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 21:59:14 +0000   Mon, 16 Aug 2021 21:57:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210816215712-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                6e266510-79b0-4fd9-b7ac-292f9d5c616e
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-v4kzv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-558bd4d5db-h25nx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     81s
	  kube-system                 etcd-multinode-20210816215712-6487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-qtjn7                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-multinode-20210816215712-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-multinode-20210816215712-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-22rzz                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-multinode-20210816215712-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 107s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x4 over 107s)  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x4 over 107s)  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x4 over 107s)  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s                  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                  kubelet     Node multinode-20210816215712-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s                  kubelet     Node multinode-20210816215712-6487 status is now: NodeReady
	  Normal  Starting                 80s                  kube-proxy  Starting kube-proxy.
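	As a cross-check of the percentages in the tables above, which kubectl describe computes as requests (or limits) over the node's Allocatable figures: cpu 850m of 8000m allocatable is about 10.6%, reported as 10%, and memory 220Mi of 32951368Ki (about 31.4Gi) is about 0.7%, reported as 0%; the displayed percentages appear truncated to whole numbers.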
	
	
	Name:               multinode-20210816215712-6487-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210816215712-6487-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 21:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210816215712-6487-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 21:59:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 21:59:06 +0000   Mon, 16 Aug 2021 21:58:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 21:59:06 +0000   Mon, 16 Aug 2021 21:58:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 21:59:06 +0000   Mon, 16 Aug 2021 21:58:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 21:59:06 +0000   Mon, 16 Aug 2021 21:58:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210816215712-6487-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                1f957ab2-b925-440e-9bbe-3bd9d7eaf167
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-lw52x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kindnet-mn52n               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-qdhwb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 40s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)  kubelet     Node multinode-20210816215712-6487-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)  kubelet     Node multinode-20210816215712-6487-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)  kubelet     Node multinode-20210816215712-6487-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                30s                kubelet     Node multinode-20210816215712-6487-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug16 21:55] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 1d 00 25 80 e9 08 06        .........%....
	[ +12.958216] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth190f5766
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 63 52 b8 2a f7 08 06        ......6cR.*...
	[ +28.765659] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 21:56] cgroup: cgroup2: unknown option "nsdelegate"
	[ +25.335856] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 21:57] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 21:58] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e 4b 0c 10 ac c4 08 06        .......K......
	[  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 1e 4b 0c 10 ac c4 08 06        .......K......
	[  +0.270213] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e 3c 32 63 c9 64 08 06        ......~<2c.d..
	[ +14.702375] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth02ea3e4d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e d0 a5 96 20 37 08 06        ......>... 7..
	[  +4.082604] cgroup: cgroup2: unknown option "nsdelegate"
	[ +24.083403] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth39e5d0d4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 db 4d d3 a2 d4 08 06        ........M.....
	[ +10.081124] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ca b8 98 9e 4e 2a 08 06        ..........N*..
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff ca b8 98 9e 4e 2a 08 06        ..........N*..
	[Aug16 21:59] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth77d256c3
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 89 b1 6a 57 a0 08 06        .........jW...
	
	* 
	* ==> etcd [a45921d62d6d1ebe48333533aad9dced66f313cc7bc9558e5a28b7994726361a] <==
	* 2021-08-16 21:57:54.703094 W | etcdserver: request "header:<ID:8128007015189211826 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-20210816215712-6487.169be844fd9cc725\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-20210816215712-6487.169be844fd9cc725\" value_size:619 lease:8128007015189211463 >> failure:<>>" with result "size:16" took too long (169.735323ms) to execute
	2021-08-16 21:57:54.703242 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (299.370087ms) to execute
	2021-08-16 21:57:54.703359 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "range_response_count:1 size:207" took too long (297.57868ms) to execute
	2021-08-16 21:58:00.838686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:10.838759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:20.838222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:30.838203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:40.838082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:50.838838 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:58:58.222126 W | wal: sync duration of 1.982050516s, expected less than 1s
	2021-08-16 21:58:58.222639 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.663411363s) to execute
	2021-08-16 21:58:59.192255 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1127" took too long (1.275379294s) to execute
	2021-08-16 21:58:59.192297 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (968.317492ms) to execute
	2021-08-16 21:58:59.192312 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:11462" took too long (2.040160289s) to execute
	2021-08-16 21:58:59.192360 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (491.636862ms) to execute
	2021-08-16 21:58:59.192482 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (878.423126ms) to execute
	2021-08-16 21:59:00.253064 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (694.00671ms) to execute
	2021-08-16 21:59:00.253089 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.794758ms) to execute
	2021-08-16 21:59:00.253130 W | etcdserver: read-only range request "key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true " with result "range_response_count:0 size:7" took too long (919.658183ms) to execute
	2021-08-16 21:59:00.253144 W | etcdserver: read-only range request "key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true " with result "range_response_count:0 size:7" took too long (775.173918ms) to execute
	2021-08-16 21:59:00.838200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 21:59:01.472113 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (461.964026ms) to execute
	2021-08-16 21:59:01.472164 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (912.494952ms) to execute
	2021-08-16 21:59:01.472657 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1127" took too long (266.293847ms) to execute
	2021-08-16 21:59:10.838946 I | etcdserver/api/etcdhttp: /health OK (status code 200)
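	The wal sync warning at 21:58:58 (1.982050516s against etcd's expected sub-second fsync) points at slow disk I/O on the node; the read-only range requests that "took too long" immediately afterward, and the apiserver traces in the next section, are downstream of that same stall.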
	
	* 
	* ==> kernel <==
	*  21:59:16 up 38 min,  0 users,  load average: 1.34, 1.20, 0.78
	Linux multinode-20210816215712-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [2028867588d06f67474159b36c46071c2a70760b5b4184e6d981c0c60ca4c0ea] <==
	* I0816 21:58:13.641106       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:58:13.641115       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:58:54.279621       1 client.go:360] parsed scheme: "passthrough"
	I0816 21:58:54.279663       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 21:58:54.279671       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 21:58:58.223005       1 trace.go:205] Trace[833245481]: "GuaranteedUpdate etcd3" type:*coordination.Lease (16-Aug-2021 21:58:56.738) (total time: 1484ms):
	Trace[833245481]: ---"Transaction committed" 1483ms (21:58:00.222)
	Trace[833245481]: [1.484375518s] [1.484375518s] END
	I0816 21:58:58.223013       1 trace.go:205] Trace[1350324049]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 21:58:56.238) (total time: 1984ms):
	Trace[1350324049]: ---"Transaction committed" 1982ms (21:58:00.222)
	Trace[1350324049]: [1.984167115s] [1.984167115s] END
	I0816 21:58:58.223175       1 trace.go:205] Trace[2031155017]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210816215712-6487-m02,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.3,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 21:58:56.738) (total time: 1484ms):
	Trace[2031155017]: ---"Object stored in database" 1484ms (21:58:00.223)
	Trace[2031155017]: [1.484713441s] [1.484713441s] END
	I0816 21:58:59.192926       1 trace.go:205] Trace[1496042084]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 21:58:58.223) (total time: 969ms):
	Trace[1496042084]: ---"About to write a response" 969ms (21:58:00.192)
	Trace[1496042084]: [969.327108ms] [969.327108ms] END
	I0816 21:58:59.193040       1 trace.go:205] Trace[1350233839]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 21:58:57.151) (total time: 2041ms):
	Trace[1350233839]: [2.04135391s] [2.04135391s] END
	I0816 21:58:59.193053       1 trace.go:205] Trace[230600924]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 21:58:57.916) (total time: 1276ms):
	Trace[230600924]: ---"About to write a response" 1276ms (21:58:00.192)
	Trace[230600924]: [1.276647971s] [1.276647971s] END
	I0816 21:58:59.193514       1 trace.go:205] Trace[2071876872]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 21:58:57.151) (total time: 2041ms):
	Trace[2071876872]: ---"Listing from storage done" 2041ms (21:58:00.193)
	Trace[2071876872]: [2.041845567s] [2.041845567s] END
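	The trace.go:205 entries above are emitted by the apiserver's k8s.io/utils/trace utility, which records named steps and logs the whole trace only when total latency crosses a threshold. A minimal sketch of the same mechanism (the operation name, field, sleep, and 500ms threshold are illustrative):

    package main

    import (
        "time"

        utiltrace "k8s.io/utils/trace"
    )

    func main() {
        // The trace is logged on return only if total time exceeds the
        // threshold passed to LogIfLong, which is why only slow requests
        // show up in the apiserver log above.
        tr := utiltrace.New("Get", utiltrace.Field{Key: "url", Value: "/api/v1/nodes"})
        defer tr.LogIfLong(500 * time.Millisecond)

        time.Sleep(600 * time.Millisecond) // stand-in for a slow etcd round-trip
        tr.Step("About to write a response")
    }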
	
	* 
	* ==> kube-controller-manager [b53c70f43763dddfb191b428c53cdc2e3a1716b01970e3eebc31ab4f4a8e88d2] <==
	* I0816 21:57:55.013589       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0816 21:57:55.018219       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-chcnk"
	E0816 21:57:55.018306       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bf775630-7b9b-46e5-b089-81a6488c5009", ResourceVersion:"300", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764747858, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00027b068), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00027b080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014fea60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00027b098), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00027b0b0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00027b0c8), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014fea80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014feac0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00153e4e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000943df8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007dcc40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0012a55a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000943e50)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0816 21:57:55.023499       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-h25nx"
	E0816 21:57:55.023929       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"fd7b7440-430e-48b2-bb5a-4544d8034ddd", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764747857, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00027aff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00027b008)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014fe9a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0014d0dc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00027b
020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00027b038), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014fe9e0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00153e480), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000943b48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007dcbd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0012a5550)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000943b98)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0816 21:57:55.028310       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 21:57:55.112100       1 shared_informer.go:247] Caches are synced for HPA 
	I0816 21:57:55.124528       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 21:57:55.157054       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 21:57:55.161849       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-chcnk"
	I0816 21:57:55.481237       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 21:57:55.481258       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 21:57:55.545268       1 shared_informer.go:247] Caches are synced for garbage collector 
	W0816 21:58:36.545451       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210816215712-6487-m02" does not exist
	I0816 21:58:36.553490       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qdhwb"
	I0816 21:58:36.556159       1 range_allocator.go:373] Set node multinode-20210816215712-6487-m02 PodCIDR to [10.244.1.0/24]
	I0816 21:58:36.556220       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mn52n"
	W0816 21:58:39.835013       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210816215712-6487-m02. Assuming now as a timestamp.
	I0816 21:58:39.835063       1 event.go:291] "Event occurred" object="multinode-20210816215712-6487-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210816215712-6487-m02 event: Registered Node multinode-20210816215712-6487-m02 in Controller"
	I0816 21:58:48.933337       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0816 21:58:48.938048       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-lw52x"
	I0816 21:58:48.941219       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-v4kzv"
	I0816 21:58:49.844226       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-lw52x" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-lw52x"
	
	* 
	* ==> kube-proxy [4a907c47a24817e1b9cf2e81b5c7862d442aac516778e3decf2ad2a4d48c6aee] <==
	* I0816 21:57:56.145617       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 21:57:56.145716       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 21:57:56.145743       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 21:57:56.328653       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 21:57:56.328695       1 server_others.go:212] Using iptables Proxier.
	I0816 21:57:56.328709       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 21:57:56.328724       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 21:57:56.329119       1 server.go:643] Version: v1.21.3
	I0816 21:57:56.330249       1 config.go:315] Starting service config controller
	I0816 21:57:56.330270       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 21:57:56.330298       1 config.go:224] Starting endpoint slice config controller
	I0816 21:57:56.330302       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 21:57:56.333512       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 21:57:56.335272       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 21:57:56.430921       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 21:57:56.430940       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6a802e2c04c2e3e7261fe7c30bce16504e1bc4fc26264bf2985980f152652135] <==
	* W0816 21:57:34.824252       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 21:57:34.836738       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 21:57:34.836886       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 21:57:34.836934       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 21:57:34.836975       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 21:57:34.840405       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 21:57:34.840574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:34.840595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 21:57:34.840831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 21:57:34.841023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 21:57:34.841261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 21:57:34.841758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:34.841785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 21:57:34.841784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:34.842317       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 21:57:34.842700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 21:57:34.842859       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 21:57:34.842997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:34.913376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 21:57:35.674159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 21:57:35.738197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:35.755056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 21:57:35.794044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 21:57:35.948295       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0816 21:57:36.237568       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 21:57:14 UTC, end at Mon 2021-08-16 21:59:16 UTC. --
	Aug 16 21:57:56 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:56.834593    1601 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6fdeb906-34a1-4a95-9a66-4d7ec70d33c9-tmp\") pod \"storage-provisioner\" (UID: \"6fdeb906-34a1-4a95-9a66-4d7ec70d33c9\") "
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.237317    1601 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ssq6\" (UniqueName: \"kubernetes.io/projected/97987b37-d278-4eb4-8573-eac407b1d4f2-kube-api-access-2ssq6\") pod \"97987b37-d278-4eb4-8573-eac407b1d4f2\" (UID: \"97987b37-d278-4eb4-8573-eac407b1d4f2\") "
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.237374    1601 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97987b37-d278-4eb4-8573-eac407b1d4f2-config-volume\") pod \"97987b37-d278-4eb4-8573-eac407b1d4f2\" (UID: \"97987b37-d278-4eb4-8573-eac407b1d4f2\") "
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: W0816 21:57:57.237679    1601 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/97987b37-d278-4eb4-8573-eac407b1d4f2/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.237831    1601 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97987b37-d278-4eb4-8573-eac407b1d4f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "97987b37-d278-4eb4-8573-eac407b1d4f2" (UID: "97987b37-d278-4eb4-8573-eac407b1d4f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.260242    1601 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97987b37-d278-4eb4-8573-eac407b1d4f2-kube-api-access-2ssq6" (OuterVolumeSpecName: "kube-api-access-2ssq6") pod "97987b37-d278-4eb4-8573-eac407b1d4f2" (UID: "97987b37-d278-4eb4-8573-eac407b1d4f2"). InnerVolumeSpecName "kube-api-access-2ssq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.338431    1601 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97987b37-d278-4eb4-8573-eac407b1d4f2-config-volume\") on node \"multinode-20210816215712-6487\" DevicePath \"\""
	Aug 16 21:57:57 multinode-20210816215712-6487 kubelet[1601]: I0816 21:57:57.338471    1601 reconciler.go:319] "Volume detached for volume \"kube-api-access-2ssq6\" (UniqueName: \"kubernetes.io/projected/97987b37-d278-4eb4-8573-eac407b1d4f2-kube-api-access-2ssq6\") on node \"multinode-20210816215712-6487\" DevicePath \"\""
	Aug 16 21:58:03 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:03.300279    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.364333    1601 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-chcnk_kube-system_97987b37-d278-4eb4-8573-eac407b1d4f2_0(02e5bbde686a64e9dcf62bc83631ce652f93511c06924861174a5f5279a8716e): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.364414    1601 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-chcnk_kube-system_97987b37-d278-4eb4-8573-eac407b1d4f2_0(02e5bbde686a64e9dcf62bc83631ce652f93511c06924861174a5f5279a8716e): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-chcnk"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.632132    1601 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-h25nx_kube-system_91c50f30-030e-467f-a926-607b16ac148d_0(6a0d4805e52c82ad351e8f6dee7bb1b73d07221dfeb20a8e975a059ca5961512): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.632199    1601 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-h25nx_kube-system_91c50f30-030e-467f-a926-607b16ac148d_0(6a0d4805e52c82ad351e8f6dee7bb1b73d07221dfeb20a8e975a059ca5961512): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-h25nx"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.632221    1601 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-h25nx_kube-system_91c50f30-030e-467f-a926-607b16ac148d_0(6a0d4805e52c82ad351e8f6dee7bb1b73d07221dfeb20a8e975a059ca5961512): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-h25nx"
	Aug 16 21:58:06 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:06.632290    1601 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-h25nx_kube-system(91c50f30-030e-467f-a926-607b16ac148d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-h25nx_kube-system(91c50f30-030e-467f-a926-607b16ac148d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-h25nx_kube-system_91c50f30-030e-467f-a926-607b16ac148d_0(6a0d4805e52c82ad351e8f6dee7bb1b73d07221dfeb20a8e975a059ca5961512): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-h25nx" podUID=91c50f30-030e-467f-a926-607b16ac148d
	Aug 16 21:58:13 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:13.354314    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:58:23 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:23.407724    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:58:33 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:33.468030    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:58:43 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:43.527653    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:58:48 multinode-20210816215712-6487 kubelet[1601]: W0816 21:58:48.680944    1601 container.go:586] Failed to update stats for container "/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3": /sys/fs/cgroup/cpuset/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/cpuset.cpus found to be empty, continuing to push stats
	Aug 16 21:58:48 multinode-20210816215712-6487 kubelet[1601]: I0816 21:58:48.945910    1601 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 21:58:49 multinode-20210816215712-6487 kubelet[1601]: I0816 21:58:49.130970    1601 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtxq\" (UniqueName: \"kubernetes.io/projected/328b8a9e-07f5-4c57-b4aa-9835aa28c56d-kube-api-access-ggtxq\") pod \"busybox-84b6686758-v4kzv\" (UID: \"328b8a9e-07f5-4c57-b4aa-9835aa28c56d\") "
	Aug 16 21:58:53 multinode-20210816215712-6487 kubelet[1601]: E0816 21:58:53.589361    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:59:03 multinode-20210816215712-6487 kubelet[1601]: E0816 21:59:03.654881    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	Aug 16 21:59:13 multinode-20210816215712-6487 kubelet[1601]: E0816 21:59:13.725362    1601 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> storage-provisioner [b8b8354bc6807185f40d604bf2d04de84c90048bdd57a6313be9292e8470e326] <==
	* I0816 21:57:57.612615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 21:57:57.621842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 21:57:57.621898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 21:57:57.630108       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 21:57:57.630235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210816215712-6487_819989d4-9fd8-4ca3-a1ad-2415fa6da1c0!
	I0816 21:57:57.630227       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc64bc4a-819d-458b-8eb4-11ffb8df6ba4", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210816215712-6487_819989d4-9fd8-4ca3-a1ad-2415fa6da1c0 became leader
	I0816 21:57:57.730620       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210816215712-6487_819989d4-9fd8-4ca3-a1ad-2415fa6da1c0!
	

                                                
                                                
-- /stdout --
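Note: the two daemon_controller.go:320 errors in the kube-controller-manager log above ("Operation cannot be fulfilled on daemonsets.apps ... the object has been modified; please apply your changes to the latest version and try again") are ordinary optimistic-concurrency conflicts: an Update carried a stale resourceVersion, the apiserver rejected it, and the controller re-queues and retries, so they are not the cause of this failure. A minimal sketch of the manual equivalent the error message describes, re-fetching the live object before re-applying (kubectl get/replace are standard; the file name kindnet.yaml is hypothetical):

	# Re-fetch the live object (with its current resourceVersion), edit, then replace.
	kubectl --context multinode-20210816215712-6487 -n kube-system get daemonset kindnet -o yaml > kindnet.yaml
	# ...edit kindnet.yaml, keeping the resourceVersion field intact...
	kubectl --context multinode-20210816215712-6487 -n kube-system replace -f kindnet.yaml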
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210816215712-6487 -n multinode-20210816215712-6487
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210816215712-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210816215712-6487 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210816215712-6487 describe pod : exit status 1 (45.250722ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context multinode-20210816215712-6487 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.96s)
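The most likely failure signal in the kubelet log above is the repeated sandbox error `failed to set bridge addr: could not add IP address to "cni0": permission denied`, which left the CoreDNS pods without a network sandbox and so would break the name resolution this ping test depends on. A hedged spot-check of the bridge and the installed CNI config inside the node (the profile name is from this run; `minikube ssh` usage and the /etc/cni/net.d mount path both appear earlier in this report):

	# Inspect the cni0 bridge and the installed CNI config on the affected node.
	out/minikube-linux-amd64 -p multinode-20210816215712-6487 ssh -- ip addr show cni0
	out/minikube-linux-amd64 -p multinode-20210816215712-6487 ssh -- sudo ls /etc/cni/net.d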

                                                
                                    
TestPreload (186.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0
E0816 22:08:20.659859    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0: (1m53.883279264s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl pull busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3
E0816 22:09:11.452028    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3: (1m6.482597431s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

                                                
                                                
-- /stdout --
panic.go:613: *** TestPreload FAILED at 2021-08-16 22:10:07.673638172 +0000 UTC m=+1750.099768894
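A hedged way to reproduce the failed assertion by hand, reusing the exact commands the test ran above (the crictl pull at preload_test.go:61, the restart at preload_test.go:71, the image listing at preload_test.go:80), to see whether the pulled busybox image survives the restart onto v1.17.3:

	# Pull busybox, restart onto the new Kubernetes version, then list images again.
	out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl pull busybox
	out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.17.3
	out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl image ls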
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect test-preload-20210816220706-6487
helpers_test.go:236: (dbg) docker inspect test-preload-20210816220706-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97",
	        "Created": "2021-08-16T22:07:07.859350558Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 130149,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:07:08.417149662Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97/hostname",
	        "HostsPath": "/var/lib/docker/containers/bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97/hosts",
	        "LogPath": "/var/lib/docker/containers/bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97/bda1841c15956b9aaaa76d0c611edce9177b941c123077c40e638b5d3a796b97-json.log",
	        "Name": "/test-preload-20210816220706-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20210816220706-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20210816220706-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/014a3cd08a7bc00aa44c41f4cb11286d488aca9f4a30f43452bbf51c652d9fa6-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/014a3cd08a7bc00aa44c41f4cb11286d488aca9f4a30f43452bbf51c652d9fa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/014a3cd08a7bc00aa44c41f4cb11286d488aca9f4a30f43452bbf51c652d9fa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/014a3cd08a7bc00aa44c41f4cb11286d488aca9f4a30f43452bbf51c652d9fa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20210816220706-6487",
	                "Source": "/var/lib/docker/volumes/test-preload-20210816220706-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20210816220706-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20210816220706-6487",
	                "name.minikube.sigs.k8s.io": "test-preload-20210816220706-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "31bf338c92b4c01427e75d374b066c8f873bdcf8f106596a890552d4b734cd74",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32856"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32853"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32855"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32854"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/31bf338c92b4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20210816220706-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bda1841c1595"
	                    ],
	                    "NetworkID": "982cbd97a25b48c5b15f3dc7efcd01a3e657db4f2a557bfd02d4830ce7d8231d",
	                    "EndpointID": "3f6858a3a78e1e5a03407076f68c477ce3eacfe59d4b34be2fdd03dbe635dd37",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
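For reference, the host port mappings shown in the inspect output above can be queried directly with a standard `docker inspect --format` Go-template expression (the container name is from this run):

	# Print the host port mapped to the container's SSH port (22/tcp -> 32857 above).
	docker inspect test-preload-20210816220706-6487 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'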
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210816220706-6487 -n test-preload-20210816220706-6487
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210816220706-6487 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210816220706-6487 logs -n 25: (1.01417983s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |              Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                         | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:14 UTC | Mon, 16 Aug 2021 21:59:14 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	|         | -- exec                                                    |                                   |         |         |                               |                               |
	|         | busybox-84b6686758-v4kzv                                   |                                   |         |         |                               |                               |
	|         | -- sh -c nslookup                                          |                                   |         |         |                               |                               |
	|         | host.minikube.internal | awk                               |                                   |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                    |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:15 UTC | Mon, 16 Aug 2021 21:59:16 UTC |
	|         | logs -n 25                                                 |                                   |         |         |                               |                               |
	| node    | add -p                                                     | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:17 UTC | Mon, 16 Aug 2021 21:59:42 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                     |                                   |         |         |                               |                               |
	| profile | list --output json                                         | minikube                          | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:43 UTC | Mon, 16 Aug 2021 21:59:43 UTC |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:44 UTC | Mon, 16 Aug 2021 21:59:44 UTC |
	|         | cp testdata/cp-test.txt                                    |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                   |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:44 UTC | Mon, 16 Aug 2021 21:59:44 UTC |
	|         | ssh sudo cat                                               |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                   |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487 cp testdata/cp-test.txt      | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:44 UTC | Mon, 16 Aug 2021 21:59:44 UTC |
	|         | multinode-20210816215712-6487-m02:/home/docker/cp-test.txt |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:44 UTC | Mon, 16 Aug 2021 21:59:45 UTC |
	|         | ssh -n                                                     |                                   |         |         |                               |                               |
	|         | multinode-20210816215712-6487-m02                          |                                   |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                          |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487 cp testdata/cp-test.txt      | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:45 UTC | Mon, 16 Aug 2021 21:59:45 UTC |
	|         | multinode-20210816215712-6487-m03:/home/docker/cp-test.txt |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:45 UTC | Mon, 16 Aug 2021 21:59:45 UTC |
	|         | ssh -n                                                     |                                   |         |         |                               |                               |
	|         | multinode-20210816215712-6487-m03                          |                                   |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                          |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:45 UTC | Mon, 16 Aug 2021 21:59:46 UTC |
	|         | node stop m03                                              |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 21:59:48 UTC | Mon, 16 Aug 2021 22:00:20 UTC |
	|         | node start m03                                             |                                   |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                   |         |         |                               |                               |
	| stop    | -p                                                         | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:00:21 UTC | Mon, 16 Aug 2021 22:01:03 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	| start   | -p                                                         | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:01:03 UTC | Mon, 16 Aug 2021 22:02:58 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	|         | --wait=true -v=8                                           |                                   |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:02:58 UTC | Mon, 16 Aug 2021 22:03:02 UTC |
	|         | node delete m03                                            |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:03:03 UTC | Mon, 16 Aug 2021 22:03:44 UTC |
	|         | stop                                                       |                                   |         |         |                               |                               |
	| start   | -p                                                         | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:03:44 UTC | Mon, 16 Aug 2021 22:04:54 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	|         | --wait=true -v=8                                           |                                   |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                   |         |         |                               |                               |
	|         | --driver=docker                                            |                                   |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                   |         |         |                               |                               |
	| start   | -p                                                         | multinode-20210816215712-6487-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:04:55 UTC | Mon, 16 Aug 2021 22:05:21 UTC |
	|         | multinode-20210816215712-6487-m03                          |                                   |         |         |                               |                               |
	|         | --driver=docker                                            |                                   |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                   |         |         |                               |                               |
	| delete  | -p                                                         | multinode-20210816215712-6487-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:05:22 UTC | Mon, 16 Aug 2021 22:05:25 UTC |
	|         | multinode-20210816215712-6487-m03                          |                                   |         |         |                               |                               |
	| -p      | multinode-20210816215712-6487                              | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:05:25 UTC | Mon, 16 Aug 2021 22:05:26 UTC |
	|         | logs -n 25                                                 |                                   |         |         |                               |                               |
	| delete  | -p                                                         | multinode-20210816215712-6487     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:05:26 UTC | Mon, 16 Aug 2021 22:05:31 UTC |
	|         | multinode-20210816215712-6487                              |                                   |         |         |                               |                               |
	| start   | -p                                                         | test-preload-20210816220706-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:07:06 UTC | Mon, 16 Aug 2021 22:08:59 UTC |
	|         | test-preload-20210816220706-6487                           |                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                   |         |         |                               |                               |
	|         | --driver=docker                                            |                                   |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                               |                                   |         |         |                               |                               |
	| ssh     | -p                                                         | test-preload-20210816220706-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:09:00 UTC | Mon, 16 Aug 2021 22:09:00 UTC |
	|         | test-preload-20210816220706-6487                           |                                   |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                |                                   |         |         |                               |                               |
	| start   | -p                                                         | test-preload-20210816220706-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:09:00 UTC | Mon, 16 Aug 2021 22:10:07 UTC |
	|         | test-preload-20210816220706-6487                           |                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                   |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker                           |                                   |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                               |                                   |         |         |                               |                               |
	| ssh     | -p                                                         | test-preload-20210816220706-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:07 UTC | Mon, 16 Aug 2021 22:10:07 UTC |
	|         | test-preload-20210816220706-6487                           |                                   |         |         |                               |                               |
	|         | -- sudo crictl image ls                                    |                                   |         |         |                               |                               |
	|---------|------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
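	The Audit table above records every minikube invocation in this run. The failing preload scenario can be replayed outside the test harness by repeating the test-preload commands verbatim (flags copied from the table rows; out/minikube-linux-amd64 is the locally built binary under test):

	out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.17.0
	out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl pull busybox
	out/minikube-linux-amd64 start -p test-preload-20210816220706-6487 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.17.3
	out/minikube-linux-amd64 ssh -p test-preload-20210816220706-6487 -- sudo crictl image ls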
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:09:00
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:09:00.964195  135006 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:09:00.964267  135006 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:09:00.964274  135006 out.go:311] Setting ErrFile to fd 2...
	I0816 22:09:00.964278  135006 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:09:00.964391  135006 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:09:00.964599  135006 out.go:305] Setting JSON to false
	I0816 22:09:00.999399  135006 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":2908,"bootTime":1629148833,"procs":202,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:09:00.999487  135006 start.go:121] virtualization: kvm guest
	I0816 22:09:01.002615  135006 out.go:177] * [test-preload-20210816220706-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:09:01.004185  135006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:09:01.002794  135006 notify.go:169] Checking for updates...
	I0816 22:09:01.005742  135006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:09:01.007156  135006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:09:01.008458  135006 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:09:01.008880  135006 config.go:177] Loaded profile config "test-preload-20210816220706-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0816 22:09:01.010643  135006 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:09:01.010674  135006 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:09:01.058333  135006 docker.go:132] docker version: linux-19.03.15
	I0816 22:09:01.058446  135006 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:09:01.134804  135006 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-16 22:09:01.092347822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:09:01.134906  135006 docker.go:244] overlay module found
	I0816 22:09:01.137165  135006 out.go:177] * Using the docker driver based on existing profile
	I0816 22:09:01.137187  135006 start.go:278] selected driver: docker
	I0816 22:09:01.137193  135006 start.go:751] validating driver "docker" against &{Name:test-preload-20210816220706-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210816220706-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:09:01.137294  135006 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:09:01.137341  135006 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:09:01.137358  135006 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:09:01.139000  135006 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:09:01.139793  135006 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:09:01.217672  135006 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-16 22:09:01.175055007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:09:01.217831  135006 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:09:01.217859  135006 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:09:01.220126  135006 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:09:01.220210  135006 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:09:01.220230  135006 cni.go:93] Creating CNI manager for ""
	I0816 22:09:01.220241  135006 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:09:01.220251  135006 start_flags.go:277] config:
	{Name:test-preload-20210816220706-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210816220706-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:09:01.221968  135006 out.go:177] * Starting control plane node test-preload-20210816220706-6487 in cluster test-preload-20210816220706-6487
	I0816 22:09:01.221999  135006 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:09:01.223611  135006 out.go:177] * Pulling base image ...
	I0816 22:09:01.223641  135006 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0816 22:09:01.223728  135006 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	W0816 22:09:01.289944  135006 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0816 22:09:01.290150  135006 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/config.json ...
	I0816 22:09:01.290387  135006 cache.go:108] acquiring lock: {Name:mke3d64dcf3270420cc281e6a6befd30594c50fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290389  135006 cache.go:108] acquiring lock: {Name:mkf050274ef6cbae73e5d2c3bc2df9d2eaaad8fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290452  135006 cache.go:108] acquiring lock: {Name:mkf6748f18d8464f93b913a77ff0a27571f3e217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290457  135006 cache.go:108] acquiring lock: {Name:mkd82ea648b841d96f18b36063bee48717854ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290507  135006 cache.go:108] acquiring lock: {Name:mke52842983c762561b8af69308d41f8fffc4376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290556  135006 cache.go:108] acquiring lock: {Name:mk0b84fbea34d74cc2da16fdbda169da7718e6bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290564  135006 cache.go:108] acquiring lock: {Name:mk56a36db721d312d0c6ac2916d98cbcf3205ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290610  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:09:01.290617  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:09:01.290562  135006 cache.go:108] acquiring lock: {Name:mkd757956ba096c9c6c2faef405bc87f0df51e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290635  135006 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 258.834µs
	I0816 22:09:01.290570  135006 cache.go:108] acquiring lock: {Name:mka468f4a786ecbd6f4b20db65e89596ce7f2801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.290635  135006 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 81.777µs
	I0816 22:09:01.290650  135006 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:09:01.290641  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0816 22:09:01.290660  135006 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:09:01.290636  135006 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0816 22:09:01.290676  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0816 22:09:01.290690  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0816 22:09:01.290698  135006 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 255.298µs
	I0816 22:09:01.290706  135006 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0816 22:09:01.290706  135006 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 259.861µs
	I0816 22:09:01.290712  135006 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0816 22:09:01.290678  135006 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 297.527µs
	I0816 22:09:01.290729  135006 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0816 22:09:01.290721  135006 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0816 22:09:01.290769  135006 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:09:01.290791  135006 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 290.592µs
	I0816 22:09:01.290807  135006 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:09:01.290832  135006 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0816 22:09:01.291235  135006 cache.go:108] acquiring lock: {Name:mk11c68e02fd22231d2b36979c48a4b133042ad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.291423  135006 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0816 22:09:01.291623  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:01.291623  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:01.291624  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:01.291919  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:01.313216  135006 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:09:01.313240  135006 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:09:01.313260  135006 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:09:01.313290  135006 start.go:313] acquiring machines lock for test-preload-20210816220706-6487: {Name:mk8a57ee8f9579af6a24b7e0a44c339c624adf84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:09:01.313363  135006 start.go:317] acquired machines lock for "test-preload-20210816220706-6487" in 60.037µs
	I0816 22:09:01.313384  135006 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:09:01.313395  135006 fix.go:55] fixHost starting: 
	I0816 22:09:01.313616  135006 cli_runner.go:115] Run: docker container inspect test-preload-20210816220706-6487 --format={{.State.Status}}
	I0816 22:09:01.352127  135006 fix.go:108] recreateIfNeeded on test-preload-20210816220706-6487: state=Running err=<nil>
	W0816 22:09:01.352170  135006 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:09:01.354910  135006 out.go:177] * Updating the running docker "test-preload-20210816220706-6487" container ...
	I0816 22:09:01.354942  135006 machine.go:88] provisioning docker machine ...
	I0816 22:09:01.354961  135006 ubuntu.go:169] provisioning hostname "test-preload-20210816220706-6487"
	I0816 22:09:01.355018  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:01.393567  135006 main.go:130] libmachine: Using SSH client type: native
	I0816 22:09:01.393732  135006 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0816 22:09:01.393749  135006 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210816220706-6487 && echo "test-preload-20210816220706-6487" | sudo tee /etc/hostname
	I0816 22:09:01.522865  135006 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210816220706-6487
	
	I0816 22:09:01.522952  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:01.562394  135006 main.go:130] libmachine: Using SSH client type: native
	I0816 22:09:01.562558  135006 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0816 22:09:01.562586  135006 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210816220706-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210816220706-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210816220706-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:09:01.675242  135006 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0816 22:09:01.679443  135006 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0816 22:09:01.679795  135006 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0816 22:09:01.682289  135006 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0816 22:09:01.683273  135006 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:09:01.683296  135006 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:09:01.683313  135006 ubuntu.go:177] setting up certificates
	I0816 22:09:01.683321  135006 provision.go:83] configureAuth start
	I0816 22:09:01.683359  135006 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210816220706-6487
	I0816 22:09:01.721242  135006 provision.go:138] copyHostCerts
	I0816 22:09:01.721294  135006 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:09:01.721303  135006 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:09:01.721361  135006 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:09:01.721463  135006 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:09:01.721477  135006 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:09:01.721511  135006 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:09:01.721570  135006 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:09:01.721586  135006 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:09:01.721613  135006 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:09:01.721659  135006 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210816220706-6487 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20210816220706-6487]
	I0816 22:09:01.778926  135006 provision.go:172] copyRemoteCerts
	I0816 22:09:01.778976  135006 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:09:01.779020  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:01.817044  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:09:01.911553  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0816 22:09:01.930669  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:09:01.949426  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:09:01.966537  135006 provision.go:86] duration metric: configureAuth took 283.206091ms
	I0816 22:09:01.966557  135006 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:09:01.966732  135006 config.go:177] Loaded profile config "test-preload-20210816220706-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0816 22:09:01.966894  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:02.016549  135006 main.go:130] libmachine: Using SSH client type: native
	I0816 22:09:02.016702  135006 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0816 22:09:02.016718  135006 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:09:02.528911  135006 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0816 22:09:02.528961  135006 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 1.238455618s
	I0816 22:09:02.528986  135006 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0816 22:09:02.595303  135006 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0816 22:09:02.595345  135006 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.304172036s
	I0816 22:09:02.595358  135006 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0816 22:09:02.653560  135006 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:09:02.653586  135006 machine.go:91] provisioned docker machine in 1.298636313s
	I0816 22:09:02.653597  135006 start.go:267] post-start starting for "test-preload-20210816220706-6487" (driver="docker")
	I0816 22:09:02.653604  135006 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:09:02.653662  135006 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:09:02.653698  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:02.692261  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:09:02.742823  135006 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0816 22:09:02.742868  135006 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.45233408s
	I0816 22:09:02.742884  135006 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0816 22:09:02.786829  135006 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:09:02.789450  135006 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:09:02.789480  135006 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:09:02.789493  135006 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:09:02.789501  135006 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:09:02.789520  135006 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:09:02.789570  135006 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:09:02.789681  135006 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:09:02.789809  135006 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:09:02.795852  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:09:02.811239  135006 start.go:270] post-start completed in 157.630337ms
	I0816 22:09:02.811291  135006 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:09:02.811333  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:02.850793  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:09:02.857728  135006 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0816 22:09:02.857772  135006 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.567209333s
	I0816 22:09:02.857803  135006 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0816 22:09:02.857834  135006 cache.go:88] Successfully saved all images to host disk.
	I0816 22:09:02.936012  135006 fix.go:57] fixHost completed within 1.62261278s
	I0816 22:09:02.936034  135006 start.go:80] releasing machines lock for "test-preload-20210816220706-6487", held for 1.622660097s
	I0816 22:09:02.936110  135006 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210816220706-6487
	I0816 22:09:02.974964  135006 ssh_runner.go:149] Run: systemctl --version
	I0816 22:09:02.974995  135006 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:09:02.975023  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:02.975069  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:09:03.018643  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:09:03.018644  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:09:03.139088  135006 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:09:03.148408  135006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:09:03.156538  135006 docker.go:153] disabling docker service ...
	I0816 22:09:03.156586  135006 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:09:03.164793  135006 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:09:03.172713  135006 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:09:03.282426  135006 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:09:03.388686  135006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:09:03.397066  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:09:03.408718  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0816 22:09:03.415894  135006 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:09:03.415924  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:09:03.422916  135006 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:09:03.428544  135006 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:09:03.428584  135006 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:09:03.434881  135006 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:09:03.440567  135006 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:09:03.541792  135006 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:09:03.550383  135006 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:09:03.550440  135006 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:09:03.553327  135006 start.go:413] Will wait 60s for crictl version
	I0816 22:09:03.553366  135006 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:09:03.579636  135006 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:09:03.579691  135006 ssh_runner.go:149] Run: crio --version
	I0816 22:09:03.638862  135006 ssh_runner.go:149] Run: crio --version
	I0816 22:09:03.699573  135006 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.3 ...
	I0816 22:09:03.699646  135006 cli_runner.go:115] Run: docker network inspect test-preload-20210816220706-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:09:03.736024  135006 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 22:09:03.739362  135006 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0816 22:09:03.739406  135006 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:09:03.765107  135006 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0816 22:09:03.765126  135006 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0816 22:09:03.765174  135006 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:09:03.765194  135006 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0816 22:09:03.765212  135006 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0816 22:09:03.765224  135006 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:09:03.765267  135006 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0816 22:09:03.765275  135006 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0816 22:09:03.765315  135006 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0816 22:09:03.765373  135006 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0816 22:09:03.765195  135006 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:09:03.765389  135006 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0816 22:09:03.766019  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:03.766315  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:03.772004  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:03.780230  135006 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0816 22:09:03.780787  135006 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{UncompressedImageCore:0xc000158120 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:03.780867  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0816 22:09:04.092876  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0816 22:09:04.094044  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0816 22:09:04.104853  135006 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc0001581f0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:04.104938  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:09:04.112384  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0816 22:09:04.117853  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0816 22:09:04.286418  135006 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{UncompressedImageCore:0xc000010028 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:04.286536  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0816 22:09:04.300938  135006 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{UncompressedImageCore:0xc000010070 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:04.301022  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:09:04.313724  135006 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0816 22:09:04.313769  135006 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0816 22:09:04.313812  135006 ssh_runner.go:149] Run: which crictl
	I0816 22:09:04.333751  135006 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0816 22:09:04.333797  135006 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0816 22:09:04.333838  135006 ssh_runner.go:149] Run: which crictl
	I0816 22:09:04.349486  135006 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0816 22:09:04.349534  135006 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0816 22:09:04.349581  135006 ssh_runner.go:149] Run: which crictl
	I0816 22:09:04.412625  135006 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0816 22:09:04.412673  135006 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0816 22:09:04.412714  135006 ssh_runner.go:149] Run: which crictl
	I0816 22:09:04.518362  135006 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0816 22:09:04.518395  135006 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0816 22:09:04.518417  135006 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0816 22:09:04.518479  135006 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0816 22:09:04.551260  135006 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0816 22:09:04.551375  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0816 22:09:04.551862  135006 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0816 22:09:04.551887  135006 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0816 22:09:04.551960  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0816 22:09:04.551974  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0816 22:09:04.554411  135006 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0816 22:09:04.554477  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0816 22:09:04.555368  135006 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0816 22:09:04.555389  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0816 22:09:04.560025  135006 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0816 22:09:04.560052  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0816 22:09:04.560122  135006 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0816 22:09:04.560137  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0816 22:09:04.560200  135006 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0816 22:09:04.560220  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
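
The existence-check/scp pairs above all follow one pattern: stat the remote path and only transfer the cached tarball when the stat exits non-zero. A hedged sketch of that pattern, using plain ssh/scp subprocesses and placeholder paths instead of minikube's internal ssh_runner:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// ensureRemoteFile copies local to remote only if the remote file is
	// missing; a failing `stat -c "%s %y"` is treated as "not there yet".
	func ensureRemoteFile(target, local, remote string) error {
		if exec.Command("ssh", target, "stat", "-c", "%s %y", remote).Run() == nil {
			return nil // already present, skip the transfer
		}
		out, err := exec.Command("scp", local, fmt.Sprintf("%s:%s", target, remote)).CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v: %s", local, err, out)
		}
		return nil
	}

	func main() {
		// Placeholder target; the log uses docker@127.0.0.1 on a mapped port.
		if err := ensureRemoteFile("docker@127.0.0.1",
			"cache/images/k8s.gcr.io/kube-scheduler_v1.17.3",
			"/var/lib/minikube/images/kube-scheduler_v1.17.3"); err != nil {
			log.Fatal(err)
		}
	}
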
	I0816 22:09:04.846805  135006 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0816 22:09:04.846892  135006 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0816 22:09:06.250521  135006 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{UncompressedImageCore:0xc000010050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:06.250646  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:09:06.746999  135006 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (1.900074555s)
	I0816 22:09:06.747021  135006 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0816 22:09:06.747045  135006 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0816 22:09:06.747096  135006 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0816 22:09:06.756821  135006 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{UncompressedImageCore:0xc000010050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:09:06.756900  135006 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0816 22:09:08.493927  135006 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (1.746807434s)
	I0816 22:09:08.493950  135006 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0816 22:09:08.493980  135006 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0816 22:09:08.494031  135006 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0816 22:09:08.493981  135006 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (1.737062623s)
	I0816 22:09:11.440627  135006 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (2.946569322s)
	I0816 22:09:11.440657  135006 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0816 22:09:11.440683  135006 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0816 22:09:11.440720  135006 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0816 22:09:14.587101  135006 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (3.14635619s)
	I0816 22:09:14.587129  135006 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0816 22:09:14.587150  135006 cache_images.go:113] Successfully loaded all cached images
	I0816 22:09:14.587156  135006 cache_images.go:82] LoadImages completed in 10.822016694s
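
Once a tarball lands on the node, crio.go imports it into CRI-O's image store with `sudo podman load -i`, one image at a time; the Completed timings above (1.9s to 3.1s per image) are those serialized loads. A minimal sketch of the loop, again with a placeholder ssh target rather than minikube's real helpers:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// loadCachedImages mirrors the sequence in the log: each transferred
	// tarball is imported with `sudo podman load -i` before the next starts.
	func loadCachedImages(target string, tarballs []string) error {
		for _, t := range tarballs {
			cmd := exec.Command("ssh", target, "sudo", "podman", "load", "-i", t)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("podman load %s: %v: %s", t, err, out)
			}
		}
		return nil
	}

	func main() {
		err := loadCachedImages("docker@127.0.0.1", []string{
			"/var/lib/minikube/images/kube-scheduler_v1.17.3",
			"/var/lib/minikube/images/kube-proxy_v1.17.3",
			"/var/lib/minikube/images/kube-controller-manager_v1.17.3",
			"/var/lib/minikube/images/kube-apiserver_v1.17.3",
		})
		if err != nil {
			log.Fatal(err)
		}
	}
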
	I0816 22:09:14.587217  135006 ssh_runner.go:149] Run: crio config
	I0816 22:09:14.652079  135006 cni.go:93] Creating CNI manager for ""
	I0816 22:09:14.652099  135006 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:09:14.652109  135006 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:09:14.652120  135006 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210816220706-6487 NodeName:test-preload-20210816220706-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/
var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:09:14.652240  135006 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210816220706-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
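
The generated file is a standard kubeadm multi-document YAML: the InitConfiguration and ClusterConfiguration documents drive `kubeadm init`, while the trailing KubeletConfiguration and KubeProxyConfiguration documents are propagated to the kubelet and kube-proxy. It is written to /var/tmp/minikube/kubeadm.yaml.new further down and would then be applied with something like (illustrative invocation, minus minikube's preflight-skip flags):

	sudo /var/lib/minikube/binaries/v1.17.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
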
	I0816 22:09:14.652320  135006 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-20210816220706-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210816220706-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:09:14.652365  135006 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0816 22:09:14.659107  135006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0816 22:09:14.659147  135006 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0816 22:09:14.665504  135006 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubeadm
	I0816 22:09:14.665506  135006 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubelet
	I0816 22:09:14.665504  135006 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubectl
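
Each of the three downloads above pairs the binary URL with a `?checksum=file:<url>.sha256` sidecar, so the fetched bytes are verified against the published SHA-256 digest before the binary is cached. A self-contained sketch of that verify-then-install step (the URL is taken from the log; the helper names and output path are illustrative, not minikube's download.go API):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"strings"
	)

	// fetch returns the body of url, failing on any non-200 response.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		url := "https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm"
		bin, err := fetch(url)
		if err != nil {
			log.Fatal(err)
		}
		sidecar, err := fetch(url + ".sha256")
		if err != nil {
			log.Fatal(err)
		}
		sum := sha256.Sum256(bin)
		want := strings.Fields(string(sidecar))[0] // sidecar holds the hex digest
		if hex.EncodeToString(sum[:]) != want {
			log.Fatalf("checksum mismatch: got %x, want %s", sum, want)
		}
		if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
			log.Fatal(err)
		}
	}
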
	I0816 22:09:15.213267  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0816 22:09:15.216690  135006 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0816 22:09:15.216725  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0816 22:09:15.256137  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0816 22:09:15.264733  135006 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0816 22:09:15.264771  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0816 22:09:15.707944  135006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:09:15.717234  135006 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:09:15.729558  135006 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0816 22:09:15.732414  135006 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0816 22:09:15.732456  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
	I0816 22:09:15.915397  135006 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:09:15.921756  135006 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (563 bytes)
	I0816 22:09:15.941862  135006 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:09:15.953000  135006 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I0816 22:09:15.964207  135006 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:09:15.966889  135006 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487 for IP: 192.168.49.2
	I0816 22:09:15.966933  135006 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:09:15.966947  135006 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:09:15.967000  135006 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.key
	I0816 22:09:15.967017  135006 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/apiserver.key.dd3b5fb2
	I0816 22:09:15.967034  135006 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/proxy-client.key
	I0816 22:09:15.967132  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:09:15.967186  135006 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:09:15.967202  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:09:15.967230  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:09:15.967253  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:09:15.967284  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:09:15.967334  135006 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:09:15.968359  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:09:15.983553  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:09:15.998577  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:09:16.013537  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:09:16.028724  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:09:16.043515  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:09:16.058067  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:09:16.073089  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:09:16.087848  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:09:16.102955  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:09:16.117716  135006 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:09:16.132481  135006 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:09:16.143412  135006 ssh_runner.go:149] Run: openssl version
	I0816 22:09:16.147845  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:09:16.154411  135006 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:09:16.157131  135006 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:09:16.157179  135006 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:09:16.161586  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:09:16.167856  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:09:16.174563  135006 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:09:16.177340  135006 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:09:16.177376  135006 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:09:16.181727  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 22:09:16.187662  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:09:16.194024  135006 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:09:16.196757  135006 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:09:16.196804  135006 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:09:16.201076  135006 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
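
The three symlink names above are not arbitrary: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and OpenSSL resolves trust anchors in /etc/ssl/certs by exactly that `<hash>.0` naming convention, so creating the links is what makes each CA discoverable to TLS clients on the node. Laid out (hashes as logged above):

	# <subject hash>.0 lookup convention
	/etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
	/etc/ssl/certs/51391683.0 -> /etc/ssl/certs/6487.pem
	/etc/ssl/certs/3ec20f2e.0 -> /etc/ssl/certs/64872.pem
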
	I0816 22:09:16.207335  135006 kubeadm.go:390] StartCluster: {Name:test-preload-20210816220706-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210816220706-6487 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:09:16.207402  135006 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:09:16.207435  135006 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:09:16.229656  135006 cri.go:76] found id: "febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a"
	I0816 22:09:16.229680  135006 cri.go:76] found id: "ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a"
	I0816 22:09:16.229686  135006 cri.go:76] found id: "1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7"
	I0816 22:09:16.229690  135006 cri.go:76] found id: "38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1"
	I0816 22:09:16.229694  135006 cri.go:76] found id: "9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842"
	I0816 22:09:16.229698  135006 cri.go:76] found id: "66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e"
	I0816 22:09:16.229702  135006 cri.go:76] found id: "65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756"
	I0816 22:09:16.229706  135006 cri.go:76] found id: "3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293"
	I0816 22:09:16.229711  135006 cri.go:76] found id: ""
	I0816 22:09:16.229748  135006 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:09:16.266873  135006 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54","pid":2624,"status":"running","bundle":"/run/containers/storage/overlay-containers/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54/userdata","rootfs":"/var/lib/containers/storage/overlay/3b0d69f7da951c76cbe638ab49107338964508ca766aa67e623f3f0b313cbec3/merged","created":"2021-08-16T22:08:05.75628718Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"df30e34a70f2b5de6ad91ee741a66802\",\"kubernetes.io/config.seen\":\"2021-08-16T22:08:04.618414145Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210816220706-6487_kube-system_d
f30e34a70f2b5de6ad91ee741a66802_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.650550701Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210816220706-6487","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"df30e34a70f2b5de6ad91ee741a66802\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210816220706-6487\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210816220706-6487_df30e34a70f2b5de6ad91ee741a66802/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528
535bec54.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210816220706-6487\",\"uid\":\"df30e34a70f2b5de6ad91ee741a66802\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3b0d69f7da951c76cbe638ab49107338964508ca766aa67e623f3f0b313cbec3/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210816220706-6487_kube-system_df30e34a70f2b5de6ad91ee741a66802_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/conta
iners/storage/overlay-containers/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df30e34a70f2b5de6ad91ee741a66802","kubernetes.io/config.hash":"df30e34a70f2b5de6ad91ee741a66802","kubernetes.io/config.seen":"2021-08-16T22:08:04.618414145Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7","pid":3836,"status":"running","bundle":"/run/containers/storage/overlay-containers/1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7/userdata","rootfs":"/var/lib/containers/storage/overlay/cef1240112b5ea905cffad699b99a4cd27a69cb577bbece36b0cef19afbe96dc/merged","created":"2021-08-16T22:08:29.344158902Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.contai
ner.hash":"4f125b72","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4f125b72\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:29.157472163Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fae42d16-b932-4470-9cd3-c2111bdc755b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_fae42d16-b932-4470-9cd3-c2111bdc755b/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cef1240112b5ea905cffad699b99a4cd27a69cb577bbece36b0cef19afbe96dc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_fae42d16-b932-4470-9cd3-c2111bdc755b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf","io
.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_fae42d16-b932-4470-9cd3-c2111bdc755b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fae42d16-b932-4470-9cd3-c2111bdc755b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fae42d16-b932-4470-9cd3-c2111bdc755b/containers/storage-provisioner/209f03b8\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fae42d16-b932-4470-9cd3-c2111bdc755b/volumes/kubernetes.io~secret/storage-provisioner-token-x2b78\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes
.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fae42d16-b932-4470-9cd3-c2111bdc755b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:08:28.714001672Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"}
,"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6","pid":2648,"status":"running","bundle":"/run/containers/storage/overlay-containers/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6/userdata","rootfs":"/var/lib/containers/storage/overlay/0dd78799f0e827563fd51ec07e42c05d892ad5f5d7b542a9c5c41aaea00c1a59/merged","created":"2021-08-16T22:08:05.768117634Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"01e1f4e495c3311ccc20368c1e385f74\",\"kubernetes.io/config.seen\":\"2021-08-16T22:08:04.618421332Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210816220706-6487_kube-system_01e1
f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.648616523Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210816220706-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210816220706-6487\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210816220706-6487_01e1f4e495c3311c
cc20368c1e385f74/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210816220706-6487\",\"uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dd78799f0e827563fd51ec07e42c05d892ad5f5d7b542a9c5c41aaea00c1a59/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210816220706-6487_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"1e64d6ee9479283ea365aeca2b5f7a63051c55ed
d64d18ddb83a155d0c5365f6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-16T22:08:04.618421332Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293","pid":2768,"status":"running","bundle":"/run/containers/storage/overlay-containers/3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293/userdata","rootfs":"/var/lib/containers/storage/overlay/cf393510efb7d31fc99800e7a4023b830fcab0a2a0dcd9c0ce154
81a2623c1db/merged","created":"2021-08-16T22:08:06.176105794Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ec604138","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ec604138\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.917975711Z","io.kubernetes.cri-o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.
cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210816220706-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210816220706-6487_01e1f4e495c3311ccc20368c1e385f74/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cf393510efb7d31fc99800e7a4023b830fcab0a2a0dcd9c0ce15481a2623c1db/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210816220706-6487_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cr
i-o.ResolvPath":"/run/containers/storage/overlay-containers/1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210816220706-6487_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/containers/kube-controller-manager/10c63bb0\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/etc-hosts\",\"readonly\":false},{\"container_path\":\"/et
c/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74",
"kubernetes.io/config.seen":"2021-08-16T22:08:04.618421332Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1","pid":3701,"status":"running","bundle":"/run/containers/storage/overlay-containers/38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1/userdata","rootfs":"/var/lib/containers/storage/overlay/14dff0740da069a50d5b6a3f46eb55ee3a570f7e7f17fe0cb3bad9bc0e5acd1d/merged","created":"2021-08-16T22:08:28.840307238Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c3683ad7","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"
c3683ad7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:28.762595343Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-wj5tb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2dca3ccc-d899-4c38-ae23-c8c33768dd73\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube
-proxy-wj5tb_2dca3ccc-d899-4c38-ae23-c8c33768dd73/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/14dff0740da069a50d5b6a3f46eb55ee3a570f7e7f17fe0cb3bad9bc0e5acd1d/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-wj5tb_kube-system_2dca3ccc-d899-4c38-ae23-c8c33768dd73_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-wj5tb_kube-system_2dca3ccc-d899-4c38-ae23-c8c33768dd73_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"reado
nly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2dca3ccc-d899-4c38-ae23-c8c33768dd73/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2dca3ccc-d899-4c38-ae23-c8c33768dd73/containers/kube-proxy/4966c678\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/2dca3ccc-d899-4c38-ae23-c8c33768dd73/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2dca3ccc-d899-4c38-ae23-c8c33768dd73/volumes/kubernetes.io~secret/kube-proxy-token-4gt5t\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-wj5tb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2dca3ccc-d899-4c38-ae23-c8c33768dd73","kubern
etes.io/config.seen":"2021-08-16T22:08:28.040142418Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6","pid":3649,"status":"running","bundle":"/run/containers/storage/overlay-containers/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6/userdata","rootfs":"/var/lib/containers/storage/overlay/a2324da639cbfe9f14ade62f7a2116eba0ad73d9d13d9d0b30e6b2c6ec7a7086/merged","created":"2021-08-16T22:08:28.611989714Z","annotations":{"app":"kindnet","controller-revision-hash":"59985d8787","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:08:28.044960323Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"39a8ed9965e6f46fa89257d767c92
e5f83fab7f89d367e9fc002109e249ce4f6","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-rqhqj_kube-system_0a087d54-6fdf-4ea4-90ad-61ef85ea5903_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:28.429377647Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-rqhqj","io.kubernetes.cri-o.Labels":"{\"tier\":\"node\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kindnet\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"0a087d54-6fdf-4ea4-90ad-61ef85ea5903\",\"io.kubernetes.pod.name\":\"kindnet-rqhqj\",\"controller-revision-hash\":\"59985d8787\",\"app\":\"kindnet\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogP
ath":"/var/log/pods/kube-system_kindnet-rqhqj_0a087d54-6fdf-4ea4-90ad-61ef85ea5903/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-rqhqj\",\"uid\":\"0a087d54-6fdf-4ea4-90ad-61ef85ea5903\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a2324da639cbfe9f14ade62f7a2116eba0ad73d9d13d9d0b30e6b2c6ec7a7086/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-rqhqj_kube-system_0a087d54-6fdf-4ea4-90ad-61ef85ea5903_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002
109e249ce4f6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6/userdata/shm","io.kubernetes.pod.name":"kindnet-rqhqj","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0a087d54-6fdf-4ea4-90ad-61ef85ea5903","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:08:28.044960323Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9","pid":4264,"status":"running","bundle":"/run/containers/storage/overlay-containers/48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9/userdata","rootfs":"/var/lib/containers/storage/overlay/f38cca25345eb182eeeedfab7f0686673f9007e71871871f0bdf01d3391a64fd/merged","created":"2021-08-16T22:08:50.960134198Z","annotation
s":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:08:28.933672423Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethd9e49d47\",\"mac\":\"4a:e0:e9:9a:e3:a3\"},{\"name\":\"eth0\",\"mac\":\"12:92:46:65:b3:c4\",\"sandbox\":\"/var/run/netns/3a36d2f7-5378-4673-8ed1-3d878404cacc\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-5cdbh_kube-system_255731e0-831a-4b73-b470-4fc360e4d9f7_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:50.830299037Z","io.kubernetes.cri-o.HostName":"co
redns-6955765f44-5cdbh","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-5cdbh","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"255731e0-831a-4b73-b470-4fc360e4d9f7\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-5cdbh\",\"pod-template-hash\":\"6955765f44\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-5cdbh_255731e0-831a-4b73-b470-4fc360e4d9f7/48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-5cdbh\",\"uid\":\"255731e0-831a-4b73-b470-4fc360e4d9f7\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var
/lib/containers/storage/overlay/f38cca25345eb182eeeedfab7f0686673f9007e71871871f0bdf01d3391a64fd/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-5cdbh_kube-system_255731e0-831a-4b73-b470-4fc360e4d9f7_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-5cdbh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"25
5731e0-831a-4b73-b470-4fc360e4d9f7","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-16T22:08:28.933672423Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756","pid":2767,"status":"running","bundle":"/run/containers/storage/overlay-containers/65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756/userdata","rootfs":"/var/lib/containers/storage/overlay/9ff60893b0681a25c8c1f7f9269b43e0b7ac3847ccbcaef6c82f21d4f7e7cb80/merged","created":"2021-08-16T22:08:06.176123213Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffc41559","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"
{\"io.kubernetes.container.hash\":\"ffc41559\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.867801898Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210816220706-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158
fd9df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210816220706-6487_f8c1872d6958c845ffffb18f158fd9df/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9ff60893b0681a25c8c1f7f9269b43e0b7ac3847ccbcaef6c82f21d4f7e7cb80/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210816220706-6487_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210816220706-6487_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinO
nce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/containers/kube-apiserver/a6f42d4b\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210816220706-6487","io.kubernet
es.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-16T22:08:04.618419504Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e","pid":2783,"status":"running","bundle":"/run/containers/storage/overlay-containers/66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e/userdata","rootfs":"/var/lib/containers/storage/overlay/b7a41c920df524f4e7254b11b532bab96e00abf919cda856dba10c258c458f46/merged","created":"2021-08-16T22:08:06.176132043Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"24cd86cc","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.k
ubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"24cd86cc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.868749342Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-pre
load-20210816220706-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"df30e34a70f2b5de6ad91ee741a66802\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210816220706-6487_df30e34a70f2b5de6ad91ee741a66802/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b7a41c920df524f4e7254b11b532bab96e00abf919cda856dba10c258c458f46/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210816220706-6487_kube-system_df30e34a70f2b5de6ad91ee741a66802_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210816220706-6487_kube-system_df30e34a70f2b5de6ad91ee741a66802_0","io.kubernetes.cri-o.SeccompProfilePa
th":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df30e34a70f2b5de6ad91ee741a66802/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df30e34a70f2b5de6ad91ee741a66802/containers/etcd/f377b7a2\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df30e34a70f2b5de6ad91ee741a66802","kubernetes.io/config.hash":"df30e34a70f2b5de6ad91ee741a66802","kubernetes.io/config.seen":"2021-08-16T22:08:04.618414145Z","kub
ernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf","pid":3803,"status":"running","bundle":"/run/containers/storage/overlay-containers/7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf/userdata","rootfs":"/var/lib/containers/storage/overlay/d499bc748266283659512ae554c2aab1863354198de56aa38c9b5b7ec3ce4396/merged","created":"2021-08-16T22:08:29.108153622Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integ
ration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:08:28.714001672Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_fae42d16-b932-4470-9cd3-c2111bdc755b_0","io.kubernet
es.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:29.029589588Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"fae42d16-b932-4470-9cd3-c2111bdc755b\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_fae42d16-b932-4470-9cd3-c2111bdc755b/7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf.log","io.kubernetes.cri-o.Metadata":"{\
"name\":\"storage-provisioner\",\"uid\":\"fae42d16-b932-4470-9cd3-c2111bdc755b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d499bc748266283659512ae554c2aab1863354198de56aa38c9b5b7ec3ce4396/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_fae42d16-b932-4470-9cd3-c2111bdc755b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7b3a19b753e8f4e415e41dbf57966f6d519de7ab992
6d6d0dffd9215e629e3cf/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fae42d16-b932-4470-9cd3-c2111bdc755b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:08:28.714001672Z","kubernetes.io/config.source":"api","org.systemd.property.
CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b","pid":2630,"status":"running","bundle":"/run/containers/storage/overlay-containers/9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b/userdata","rootfs":"/var/lib/containers/storage/overlay/b2b50519c62f37a813698c9d100fad95c9b84b017a71e25ac869b321a16624ac/merged","created":"2021-08-16T22:08:05.756252839Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"f8c1872d6958c845ffffb18f158fd9df\",\"kubernetes.io/config.seen\":\"2021-08-16T22:08:04.618419504Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210816220706-648
7_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.646642544Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210816220706-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210816220706-6487\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210816220706-6487_f8c1872d6958c845ffffb18f158fd9df/9
185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210816220706-6487\",\"uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b2b50519c62f37a813698c9d100fad95c9b84b017a71e25ac869b321a16624ac/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210816220706-6487_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b","io.kubern
etes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-16T22:08:04.618419504Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842","pid":2759,"status":"running","bundle":"/run/containers/storage/overlay-containers/9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842/userdata","rootfs":"/var/lib/containers/storage/overlay/3c2a9abf1a0caabcf77c1733fd17a6677a29b685362f29308bc0fc47ce270d7f/merged","created":"2021-08-16T22:
08:06.176144737Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.857840276Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","
io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210816220706-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210816220706-6487_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3c2a9abf1a0caabcf77c1733fd17a6677a29b685362f29308bc0fc47ce270d7f/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210816220706-6487_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123/u
serdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210816220706-6487_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/d4212ecb\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminati
onGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-16T22:08:04.618422461Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a","pid":3991,"status":"running","bundle":"/run/containers/storage/overlay-containers/ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a/userdata","rootfs":"/var/lib/containers/storage/overlay/a281bf04b251605867fdc313a648ee8141b30ce2345f574dfc05cd9320944070/merged","created":"2021-08-16T22:08:34.396057854Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5f14770a","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/ter
mination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5f14770a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:34.281090181Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kin
dnet-rqhqj\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0a087d54-6fdf-4ea4-90ad-61ef85ea5903\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rqhqj_0a087d54-6fdf-4ea4-90ad-61ef85ea5903/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a281bf04b251605867fdc313a648ee8141b30ce2345f574dfc05cd9320944070/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rqhqj_kube-system_0a087d54-6fdf-4ea4-90ad-61ef85ea5903_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rqhqj_kube-system_0a087d54-6fdf-4ea4-90ad-61ef85ea5903_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.ku
bernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0a087d54-6fdf-4ea4-90ad-61ef85ea5903/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0a087d54-6fdf-4ea4-90ad-61ef85ea5903/containers/kindnet-cni/5014cb9f\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0a087d54-6fdf-4ea4-90ad-61ef85ea5903/volumes/kubernetes.io~secret/kindnet-token-6857q\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-rqhqj","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminatio
nGracePeriod":"30","io.kubernetes.pod.uid":"0a087d54-6fdf-4ea4-90ad-61ef85ea5903","kubernetes.io/config.seen":"2021-08-16T22:08:28.044960323Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123","pid":2638,"status":"running","bundle":"/run/containers/storage/overlay-containers/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123/userdata","rootfs":"/var/lib/containers/storage/overlay/c10205053a7e8dd99d56099cc8593786f15f88e1d17cef1ced259cc0d5c7209f/merged","created":"2021-08-16T22:08:05.75625891Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:08:04.618422461Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"bb577061a1
7ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210816220706-6487_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:05.644373035Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210816220706-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-2021081
6220706-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210816220706-6487_bb577061a17ad23cfbbf52e9419bf32a/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210816220706-6487\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c10205053a7e8dd99d56099cc8593786f15f88e1d17cef1ced259cc0d5c7209f/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210816220706-6487_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storag
e/overlay-containers/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210816220706-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-16T22:08:04.618422461Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d","pid":3656,"status":"running","bundle":"/run/
containers/storage/overlay-containers/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d/userdata","rootfs":"/var/lib/containers/storage/overlay/d3e2cb207d7aadfc602746cd4e23001b2579f1754c2e4c8e85a1e37047c8e6ca/merged","created":"2021-08-16T22:08:28.712457397Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:08:28.040142418Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-wj5tb_kube-system_2dca3ccc-d899-4c38-ae23-c8c33768dd73_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:08:28.426373208Z","io.kubernetes.cri-o.HostName":"test-preload-20210816220706-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.
cri-o.HostnamePath":"/run/containers/storage/overlay-containers/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-wj5tb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"68bd87b66\",\"io.kubernetes.pod.uid\":\"2dca3ccc-d899-4c38-ae23-c8c33768dd73\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-wj5tb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-wj5tb_2dca3ccc-d899-4c38-ae23-c8c33768dd73/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-wj5tb\",\"uid\":\"2dca3ccc-d899-4c38-ae23-c8c33768dd73\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3e2cb207d7aadfc602746cd4e23001b2579f17
54c2e4c8e85a1e37047c8e6ca/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-wj5tb_kube-system_2dca3ccc-d899-4c38-ae23-c8c33768dd73_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d/userdata/shm","io.kubernetes.pod.name":"kube-proxy-wj5tb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2dca3ccc-d899-4c38-ae23-c8c33768dd73","k8s-app":"kube-proxy","kubernetes.io/
config.seen":"2021-08-16T22:08:28.040142418Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a","pid":4296,"status":"running","bundle":"/run/containers/storage/overlay-containers/febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a/userdata","rootfs":"/var/lib/containers/storage/overlay/624eb45649622c3ebc3eb104573ce08f30f8f984def75554177080d5913b8fb4/merged","created":"2021-08-16T22:08:51.10407749Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cb851c05","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.cont
ainer.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cb851c05\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:08:50.997270752Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd
0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-5cdbh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"255731e0-831a-4b73-b470-4fc360e4d9f7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-5cdbh_255731e0-831a-4b73-b470-4fc360e4d9f7/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/624eb45649622c3ebc3eb104573ce08f30f8f984def75554177080d5913b8fb4/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-5cdbh_kube-system_255731e0-831a-4b73-b470-4fc360e4d9f7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/48abd00c5cb4f29261fe2797b6e190acf5f077f552d
f63c2631725f7810d45c9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6955765f44-5cdbh_kube-system_255731e0-831a-4b73-b470-4fc360e4d9f7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/255731e0-831a-4b73-b470-4fc360e4d9f7/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/255731e0-831a-4b73-b470-4fc360e4d9f7/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/255731e0-831a-4b73-b470-4fc360e4d9f7/containers/coredns/30f1d277\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/v
ar/lib/kubelet/pods/255731e0-831a-4b73-b470-4fc360e4d9f7/volumes/kubernetes.io~secret/coredns-token-hx4gh\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6955765f44-5cdbh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"255731e0-831a-4b73-b470-4fc360e4d9f7","kubernetes.io/config.seen":"2021-08-16T22:08:28.933672423Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
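The JSON array ending above is the runtime's container listing that cri.go consumes: each element carries an id, a status, and the CRI-O annotations. A minimal sketch, assuming only the field names visible in the log (the struct and sample data are illustrative, not minikube's actual types), of decoding it into the {ID Status} pairs printed below:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // listedContainer keeps just the keys the cri.go lines below need.
    type listedContainer struct {
    	ID          string            `json:"id"`
    	Status      string            `json:"status"`
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	// Stand-in for the full array above (IDs abbreviated).
    	raw := []byte(`[{"id":"021f018475a4","status":"running","annotations":{}}]`)
    	var containers []listedContainer
    	if err := json.Unmarshal(raw, &containers); err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers {
    		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
    	}
    }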
	I0816 22:09:16.267581  135006 cri.go:113] list returned 16 containers
	I0816 22:09:16.267595  135006 cri.go:116] container: {ID:021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54 Status:running}
	I0816 22:09:16.267605  135006 cri.go:118] skipping 021f018475a4b63c95f19e00400c1c1498dd61d453af3b9f51ac6528535bec54 - not in ps
	I0816 22:09:16.267609  135006 cri.go:116] container: {ID:1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7 Status:running}
	I0816 22:09:16.267614  135006 cri.go:122] skipping {1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7 running}: state = "running", want "paused"
	I0816 22:09:16.267635  135006 cri.go:116] container: {ID:1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6 Status:running}
	I0816 22:09:16.267640  135006 cri.go:118] skipping 1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6 - not in ps
	I0816 22:09:16.267644  135006 cri.go:116] container: {ID:3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293 Status:running}
	I0816 22:09:16.267651  135006 cri.go:122] skipping {3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293 running}: state = "running", want "paused"
	I0816 22:09:16.267659  135006 cri.go:116] container: {ID:38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 Status:running}
	I0816 22:09:16.267666  135006 cri.go:122] skipping {38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 running}: state = "running", want "paused"
	I0816 22:09:16.267670  135006 cri.go:116] container: {ID:39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6 Status:running}
	I0816 22:09:16.267678  135006 cri.go:118] skipping 39a8ed9965e6f46fa89257d767c92e5f83fab7f89d367e9fc002109e249ce4f6 - not in ps
	I0816 22:09:16.267682  135006 cri.go:116] container: {ID:48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9 Status:running}
	I0816 22:09:16.267689  135006 cri.go:118] skipping 48abd00c5cb4f29261fe2797b6e190acf5f077f552df63c2631725f7810d45c9 - not in ps
	I0816 22:09:16.267692  135006 cri.go:116] container: {ID:65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756 Status:running}
	I0816 22:09:16.267701  135006 cri.go:122] skipping {65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756 running}: state = "running", want "paused"
	I0816 22:09:16.267706  135006 cri.go:116] container: {ID:66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e Status:running}
	I0816 22:09:16.267713  135006 cri.go:122] skipping {66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e running}: state = "running", want "paused"
	I0816 22:09:16.267718  135006 cri.go:116] container: {ID:7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf Status:running}
	I0816 22:09:16.267725  135006 cri.go:118] skipping 7b3a19b753e8f4e415e41dbf57966f6d519de7ab9926d6d0dffd9215e629e3cf - not in ps
	I0816 22:09:16.267731  135006 cri.go:116] container: {ID:9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b Status:running}
	I0816 22:09:16.267735  135006 cri.go:118] skipping 9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b - not in ps
	I0816 22:09:16.267742  135006 cri.go:116] container: {ID:9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842 Status:running}
	I0816 22:09:16.267746  135006 cri.go:122] skipping {9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842 running}: state = "running", want "paused"
	I0816 22:09:16.267752  135006 cri.go:116] container: {ID:ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a Status:running}
	I0816 22:09:16.267757  135006 cri.go:122] skipping {ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a running}: state = "running", want "paused"
	I0816 22:09:16.267763  135006 cri.go:116] container: {ID:b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123 Status:running}
	I0816 22:09:16.267768  135006 cri.go:118] skipping b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123 - not in ps
	I0816 22:09:16.267774  135006 cri.go:116] container: {ID:de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d Status:running}
	I0816 22:09:16.267781  135006 cri.go:118] skipping de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d - not in ps
	I0816 22:09:16.267787  135006 cri.go:116] container: {ID:febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a Status:running}
	I0816 22:09:16.267791  135006 cri.go:122] skipping {febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a running}: state = "running", want "paused"
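	Every one of the 16 containers is skipped here, for one of two reasons: IDs that never appeared in the `crictl ps` output are dropped as "not in ps", and the rest are running while the caller wants "paused" ones. A sketch of that filter with stand-in names and data of my own (cri.go's real structures differ):

    package main

    import "fmt"

    func main() {
    	// Stand-in data; IDs abbreviated from the log above.
    	states := map[string]string{"38e217fbaedb": "running", "021f018475a4": "running"}
    	inPS := map[string]bool{"38e217fbaedb": true} // sandbox IDs are absent from ps
    	want := "paused"

    	var matched []string
    	for id, state := range states {
    		switch {
    		case !inPS[id]:
    			fmt.Printf("skipping %s - not in ps\n", id)
    		case state != want:
    			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", id, state, state, want)
    		default:
    			matched = append(matched, id)
    		}
    	}
    	fmt.Println("matched:", matched) // empty here, as in the log
    }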
	I0816 22:09:16.267825  135006 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:09:16.274423  135006 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:09:16.274443  135006 kubeadm.go:600] restartCluster start
	I0816 22:09:16.274482  135006 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:09:16.280463  135006 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:09:16.281137  135006 kubeconfig.go:93] found "test-preload-20210816220706-6487" server: "https://192.168.49.2:8443"
	I0816 22:09:16.281528  135006 kapi.go:59] client config for test-preload-20210816220706-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-2021081622
0706-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
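	The rest.Config dump shows minikube talking to the apiserver with the profile's client certificate rather than a bearer token. A minimal client-go equivalent, with the long profile paths abbreviated to placeholders (paths and names here are illustrative):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profile/client.crt", // placeholder paths
    			KeyFile:  "/path/to/profile/client.key",
    			CAFile:   "/path/to/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	fmt.Println(clientset != nil, err)
    }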
	I0816 22:09:16.282912  135006 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:09:16.288897  135006 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-16 22:08:00.662048419 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-16 22:09:15.959454773 +0000
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
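	The unified diff above is the whole trigger for the restart path: the only change between the deployed kubeadm.yaml and the freshly rendered .new file is the kubernetesVersion bump from v1.17.0 to v1.17.3, and any non-empty diff means "needs reconfigure". A local sketch of that decision (minikube runs diff over SSH; this assumes plain files on disk):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configsDiffer mirrors the exit-code contract of diff(1):
    // 0 = identical, 1 = files differ, anything else = trouble.
    func configsDiffer(current, proposed string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff failed:", err)
    		return
    	}
    	if differ {
    		fmt.Print("needs reconfigure: configs differ:\n" + diff)
    	}
    }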
	I0816 22:09:16.288913  135006 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:09:16.288925  135006 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:09:16.288961  135006 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:09:16.311348  135006 cri.go:76] found id: "febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a"
	I0816 22:09:16.311383  135006 cri.go:76] found id: "ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a"
	I0816 22:09:16.311388  135006 cri.go:76] found id: "1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7"
	I0816 22:09:16.311392  135006 cri.go:76] found id: "38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1"
	I0816 22:09:16.311395  135006 cri.go:76] found id: "9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842"
	I0816 22:09:16.311399  135006 cri.go:76] found id: "66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e"
	I0816 22:09:16.311402  135006 cri.go:76] found id: "65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756"
	I0816 22:09:16.311406  135006 cri.go:76] found id: "3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293"
	I0816 22:09:16.311409  135006 cri.go:76] found id: ""
	I0816 22:09:16.311414  135006 cri.go:221] Stopping containers: [febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a 1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7 38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842 66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e 65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756 3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293]
	I0816 22:09:16.311451  135006 ssh_runner.go:149] Run: which crictl
	I0816 22:09:16.314077  135006 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a 1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7 38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842 66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e 65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756 3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293
	I0816 22:09:17.656814  135006 ssh_runner.go:189] Completed: sudo /usr/bin/crictl stop febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a ab6cb51a35acc62a44f30f0de47168b66e4f1380d174f9b401da2d7133d33e1a 1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7 38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842 66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e 65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756 3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293: (1.342694911s)
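	All eight kube-system containers are stopped with a single crictl invocation (hence the one 1.34s "Completed" line), after resolving the binary with `which crictl`. A hedged sketch of the same batch stop, run locally instead of through minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func stopContainers(ids []string) error {
    	if len(ids) == 0 {
    		return nil
    	}
    	crictl, err := exec.LookPath("crictl") // the log's `which crictl`
    	if err != nil {
    		return err
    	}
    	// One invocation for the whole batch: sudo <crictl> stop id1 id2 ...
    	args := append([]string{crictl, "stop"}, ids...)
    	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    		return fmt.Errorf("crictl stop: %v: %s", err, strings.TrimSpace(string(out)))
    	}
    	return nil
    }

    func main() {
    	// Abbreviated IDs from the "found id" lines above.
    	fmt.Println(stopContainers([]string{"febcfcb711ef63", "ab6cb51a35acc6"}))
    }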
	I0816 22:09:17.656880  135006 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:09:17.665954  135006 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:09:17.672540  135006 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5611 Aug 16 22:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5647 Aug 16 22:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 16 22:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5599 Aug 16 22:08 /etc/kubernetes/scheduler.conf
	
	I0816 22:09:17.672595  135006 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:09:17.678754  135006 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:09:17.685037  135006 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:09:17.690958  135006 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
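	The four grep runs verify that every kubeconfig under /etc/kubernetes already points at https://control-plane.minikube.internal:8443; a file missing that endpoint would be regenerated. The same check in Go, as a sketch (the helper name is mine):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func kubeconfigsPointAt(endpoint string, files ...string) (bool, error) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			return false, err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			return false, nil // this kubeconfig would need rewriting
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := kubeconfigsPointAt("https://control-plane.minikube.internal:8443",
    		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
    	fmt.Println(ok, err)
    }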
	I0816 22:09:17.697008  135006 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:09:17.703103  135006 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:09:17.703122  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:09:17.748109  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:09:18.592648  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:09:18.740610  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:09:18.798775  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
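	Rather than a full `kubeadm init`, the restart replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each with PATH pointed at the version-pinned binaries for v1.17.3. A sketch of that sequencing; the phase names and paths come from the log, the wrapper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runInitPhases(version, config string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command("kubeadm", args...)
    		// Prefer the version-pinned binaries, as `sudo env PATH=...` does above.
    		cmd.Env = append(cmd.Env, "PATH=/var/lib/minikube/binaries/"+version+":/usr/sbin:/usr/bin:/sbin:/bin")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %v: %v: %s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(runInitPhases("v1.17.3", "/var/tmp/minikube/kubeadm.yaml"))
    }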
	I0816 22:09:18.925477  135006 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:09:18.925537  135006 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:09:19.445260  135006 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:09:19.945217  135006 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:09:19.965827  135006 api_server.go:70] duration metric: took 1.040349904s to wait for apiserver process to appear ...
	I0816 22:09:19.965854  135006 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:09:19.965866  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:09:19.966282  135006 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0816 22:09:20.466558  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:09:23.991742  135006 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:09:23.991785  135006 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:09:24.467379  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:09:24.471363  135006 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:09:24.471381  135006 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:09:24.966416  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:09:24.972010  135006 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:09:24.972036  135006 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:09:25.467302  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:09:25.471436  135006 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 22:09:25.477033  135006 api_server.go:139] control plane version: v1.17.3
	I0816 22:09:25.477052  135006 api_server.go:129] duration metric: took 5.511193029s to wait for apiserver health ...
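
The healthz progression above (connection refused, then 403 for the anonymous probe, then 500 while post-start hooks finish, then 200 "ok") is the normal apiserver startup sequence; the poller treats anything short of 200 "ok" as not ready and retries on a roughly 500ms cadence. A minimal sketch of such a loop, assuming an unauthenticated probe against a self-signed serving cert:

    package apiserverwait

    import (
    	"context"
    	"crypto/tls"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 "ok" or ctx expires.
    func waitForHealthz(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Skip cert verification: this probe is anonymous, as the 403 above shows.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }
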
	I0816 22:09:25.477061  135006 cni.go:93] Creating CNI manager for ""
	I0816 22:09:25.477066  135006 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:09:25.479172  135006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:09:25.479222  135006 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:09:25.482731  135006 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.17.3/kubectl ...
	I0816 22:09:25.482746  135006 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:09:25.494998  135006 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.17.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
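
The two steps above, streaming an in-memory manifest to /var/tmp/minikube/cni.yaml and then applying it with the version-pinned kubectl, can be sketched as follows (assuming a local file write in place of minikube's SSH runner):

    package cniapply

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifest writes manifest bytes to path, then applies them with the
    // pinned kubectl, mirroring the "scp memory --> cni.yaml" step in the log.
    func applyManifest(manifest []byte, path string) error {
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.17.3/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }
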
	I0816 22:09:25.669864  135006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:09:25.677412  135006 system_pods.go:59] 8 kube-system pods found
	I0816 22:09:25.677438  135006 system_pods.go:61] "coredns-6955765f44-5cdbh" [255731e0-831a-4b73-b470-4fc360e4d9f7] Running
	I0816 22:09:25.677444  135006 system_pods.go:61] "etcd-test-preload-20210816220706-6487" [11378e4f-5b9a-41ad-b820-996b630a1424] Running
	I0816 22:09:25.677447  135006 system_pods.go:61] "kindnet-rqhqj" [0a087d54-6fdf-4ea4-90ad-61ef85ea5903] Running
	I0816 22:09:25.677451  135006 system_pods.go:61] "kube-apiserver-test-preload-20210816220706-6487" [a00ab7e5-e0a3-4025-844f-2620dc529dd0] Running
	I0816 22:09:25.677455  135006 system_pods.go:61] "kube-controller-manager-test-preload-20210816220706-6487" [17b8eeb3-ca87-49f7-a747-f2df66a1ec83] Running
	I0816 22:09:25.677458  135006 system_pods.go:61] "kube-proxy-wj5tb" [2dca3ccc-d899-4c38-ae23-c8c33768dd73] Running
	I0816 22:09:25.677462  135006 system_pods.go:61] "kube-scheduler-test-preload-20210816220706-6487" [b5a0fab9-5c04-4891-b30c-a1130c13c9bd] Running
	I0816 22:09:25.677465  135006 system_pods.go:61] "storage-provisioner" [fae42d16-b932-4470-9cd3-c2111bdc755b] Running
	I0816 22:09:25.677470  135006 system_pods.go:74] duration metric: took 7.586019ms to wait for pod list to return data ...
	I0816 22:09:25.677507  135006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:09:25.680157  135006 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:09:25.680179  135006 node_conditions.go:123] node cpu capacity is 8
	I0816 22:09:25.680189  135006 node_conditions.go:105] duration metric: took 2.678362ms to run NodePressure ...
	I0816 22:09:25.680202  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:09:25.823791  135006 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:09:25.826477  135006 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0816 22:09:26.106691  135006 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0816 22:09:26.650069  135006 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0816 22:09:27.309250  135006 retry.go:31] will retry after 791.196345ms: kubelet not initialised
	I0816 22:09:28.104058  135006 retry.go:31] will retry after 1.170244332s: kubelet not initialised
	I0816 22:09:29.277508  135006 retry.go:31] will retry after 2.253109428s: kubelet not initialised
	I0816 22:09:31.533969  135006 retry.go:31] will retry after 1.610739793s: kubelet not initialised
	I0816 22:09:33.147936  135006 retry.go:31] will retry after 2.804311738s: kubelet not initialised
	I0816 22:09:35.956192  135006 retry.go:31] will retry after 3.824918958s: kubelet not initialised
	I0816 22:09:39.784600  135006 retry.go:31] will retry after 7.69743562s: kubelet not initialised
	I0816 22:09:47.485404  135006 retry.go:31] will retry after 14.635568968s: kubelet not initialised
	I0816 22:10:02.124983  135006 kubeadm.go:746] kubelet initialised
	I0816 22:10:02.125006  135006 kubeadm.go:747] duration metric: took 36.30119268s waiting for restarted kubelet to initialise ...
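
The "will retry after" intervals above grow from 276ms to about 14.6s, i.e. they roughly double with jitter on each failed check. An illustrative version of that pattern (not minikube's pkg/util/retry itself):

    package retrydemo

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs check with roughly doubling, jittered delays until it
    // succeeds or the overall timeout elapses.
    func retryUntil(timeout, initial time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	interval := initial
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		sleep := interval
    		// Jitter keeps concurrent pollers from hitting the server in lockstep.
    		if j := int64(interval / 2); j > 0 {
    			sleep += time.Duration(rand.Int63n(j))
    		}
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		interval *= 2
    	}
    }
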
	I0816 22:10:02.125013  135006 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:10:02.128798  135006 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-48hr7" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.135529  135006 pod_ready.go:92] pod "coredns-6955765f44-48hr7" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.135548  135006 pod_ready.go:81] duration metric: took 6.723046ms waiting for pod "coredns-6955765f44-48hr7" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.135555  135006 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-5cdbh" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.138614  135006 pod_ready.go:92] pod "coredns-6955765f44-5cdbh" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.138627  135006 pod_ready.go:81] duration metric: took 3.06598ms waiting for pod "coredns-6955765f44-5cdbh" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.138634  135006 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.144387  135006 pod_ready.go:92] pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.144403  135006 pod_ready.go:81] duration metric: took 5.762722ms waiting for pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.144417  135006 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.147411  135006 pod_ready.go:92] pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.147427  135006 pod_ready.go:81] duration metric: took 3.002149ms waiting for pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.147437  135006 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.524811  135006 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.524839  135006 pod_ready.go:81] duration metric: took 377.391635ms waiting for pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.524854  135006 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n6mvk" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.925296  135006 pod_ready.go:92] pod "kube-proxy-n6mvk" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:02.925322  135006 pod_ready.go:81] duration metric: took 400.458962ms waiting for pod "kube-proxy-n6mvk" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:02.925338  135006 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:03.324727  135006 pod_ready.go:92] pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:03.324749  135006 pod_ready.go:81] duration metric: took 399.401127ms waiting for pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:03.324760  135006 pod_ready.go:38] duration metric: took 1.199738119s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
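
Each 'has status "Ready":"True"' line above is a check of the pod's PodReady condition via the API. A hedged client-go sketch of that check (assumes client-go v0.18+ for the context-aware Get):

    package podready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podIsReady fetches a pod and reports whether its PodReady condition is True.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // condition not reported yet
    }
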
	I0816 22:10:03.324781  135006 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:10:03.343876  135006 ops.go:34] apiserver oom_adj: -16
	I0816 22:10:03.343894  135006 kubeadm.go:604] restartCluster took 47.069446644s
	I0816 22:10:03.343922  135006 kubeadm.go:392] StartCluster complete in 47.136571865s
	I0816 22:10:03.343942  135006 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:10:03.344030  135006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:10:03.344679  135006 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:10:03.345261  135006 kapi.go:59] client config for test-preload-20210816220706-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:10:03.854715  135006 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210816220706-6487" rescaled to 1
	I0816 22:10:03.854766  135006 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0816 22:10:03.857052  135006 out.go:177] * Verifying Kubernetes components...
	I0816 22:10:03.854815  135006 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:10:03.857101  135006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:10:03.854835  135006 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0816 22:10:03.857179  135006 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210816220706-6487"
	I0816 22:10:03.857199  135006 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210816220706-6487"
	I0816 22:10:03.854986  135006 config.go:177] Loaded profile config "test-preload-20210816220706-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0816 22:10:03.857215  135006 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210816220706-6487"
	I0816 22:10:03.857227  135006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210816220706-6487"
	W0816 22:10:03.857206  135006 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:10:03.857345  135006 host.go:66] Checking if "test-preload-20210816220706-6487" exists ...
	I0816 22:10:03.857524  135006 cli_runner.go:115] Run: docker container inspect test-preload-20210816220706-6487 --format={{.State.Status}}
	I0816 22:10:03.857822  135006 cli_runner.go:115] Run: docker container inspect test-preload-20210816220706-6487 --format={{.State.Status}}
	I0816 22:10:03.908376  135006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:10:03.906854  135006 kapi.go:59] client config for test-preload-20210816220706-6487: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/test-preload-20210816220706-6487/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:10:03.908509  135006 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:10:03.908525  135006 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:10:03.908584  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:10:03.914922  135006 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210816220706-6487"
	W0816 22:10:03.914945  135006 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:10:03.914966  135006 host.go:66] Checking if "test-preload-20210816220706-6487" exists ...
	I0816 22:10:03.915294  135006 cli_runner.go:115] Run: docker container inspect test-preload-20210816220706-6487 --format={{.State.Status}}
	I0816 22:10:03.932552  135006 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210816220706-6487" to be "Ready" ...
	I0816 22:10:03.932860  135006 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:10:03.934727  135006 node_ready.go:49] node "test-preload-20210816220706-6487" has status "Ready":"True"
	I0816 22:10:03.934746  135006 node_ready.go:38] duration metric: took 2.157162ms waiting for node "test-preload-20210816220706-6487" to be "Ready" ...
	I0816 22:10:03.934756  135006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:10:03.938490  135006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-48hr7" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:03.951601  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:10:03.956147  135006 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:10:03.956167  135006 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:10:03.956209  135006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210816220706-6487
	I0816 22:10:03.992533  135006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/test-preload-20210816220706-6487/id_rsa Username:docker}
	I0816 22:10:04.043879  135006 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:10:04.083750  135006 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:10:04.125062  135006 pod_ready.go:92] pod "coredns-6955765f44-48hr7" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:04.125084  135006 pod_ready.go:81] duration metric: took 186.568553ms waiting for pod "coredns-6955765f44-48hr7" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:04.125095  135006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-5cdbh" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:04.239886  135006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:10:04.239920  135006 addons.go:344] enableAddons completed in 385.089837ms
	I0816 22:10:04.524486  135006 pod_ready.go:92] pod "coredns-6955765f44-5cdbh" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:04.524505  135006 pod_ready.go:81] duration metric: took 399.404347ms waiting for pod "coredns-6955765f44-5cdbh" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:04.524515  135006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:04.924701  135006 pod_ready.go:92] pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:04.924721  135006 pod_ready.go:81] duration metric: took 400.200099ms waiting for pod "etcd-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:04.924733  135006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:05.324822  135006 pod_ready.go:92] pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:05.324841  135006 pod_ready.go:81] duration metric: took 400.101421ms waiting for pod "kube-apiserver-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:05.324851  135006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:05.725068  135006 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:05.725090  135006 pod_ready.go:81] duration metric: took 400.230274ms waiting for pod "kube-controller-manager-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:05.725106  135006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6mvk" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:06.125256  135006 pod_ready.go:92] pod "kube-proxy-n6mvk" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:06.125282  135006 pod_ready.go:81] duration metric: took 400.167567ms waiting for pod "kube-proxy-n6mvk" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:06.125298  135006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:06.525101  135006 pod_ready.go:92] pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:10:06.525122  135006 pod_ready.go:81] duration metric: took 399.816083ms waiting for pod "kube-scheduler-test-preload-20210816220706-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:10:06.525132  135006 pod_ready.go:38] duration metric: took 2.590362904s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:10:06.525146  135006 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:10:06.525180  135006 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:10:06.545977  135006 api_server.go:70] duration metric: took 2.691189281s to wait for apiserver process to appear ...
	I0816 22:10:06.545996  135006 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:10:06.546004  135006 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 22:10:06.550174  135006 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 22:10:06.550843  135006 api_server.go:139] control plane version: v1.17.3
	I0816 22:10:06.550861  135006 api_server.go:129] duration metric: took 4.860735ms to wait for apiserver health ...
	I0816 22:10:06.550869  135006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:10:06.725731  135006 system_pods.go:59] 9 kube-system pods found
	I0816 22:10:06.725760  135006 system_pods.go:61] "coredns-6955765f44-48hr7" [58a77582-822c-4fc0-ab12-61c561140d80] Running
	I0816 22:10:06.725765  135006 system_pods.go:61] "coredns-6955765f44-5cdbh" [255731e0-831a-4b73-b470-4fc360e4d9f7] Running
	I0816 22:10:06.725769  135006 system_pods.go:61] "etcd-test-preload-20210816220706-6487" [11378e4f-5b9a-41ad-b820-996b630a1424] Running
	I0816 22:10:06.725772  135006 system_pods.go:61] "kindnet-rqhqj" [0a087d54-6fdf-4ea4-90ad-61ef85ea5903] Running
	I0816 22:10:06.725776  135006 system_pods.go:61] "kube-apiserver-test-preload-20210816220706-6487" [a00ab7e5-e0a3-4025-844f-2620dc529dd0] Running
	I0816 22:10:06.725782  135006 system_pods.go:61] "kube-controller-manager-test-preload-20210816220706-6487" [17b8eeb3-ca87-49f7-a747-f2df66a1ec83] Running
	I0816 22:10:06.725788  135006 system_pods.go:61] "kube-proxy-n6mvk" [4f4fb91c-1b09-46a7-9ea5-3cf07321a116] Running
	I0816 22:10:06.725793  135006 system_pods.go:61] "kube-scheduler-test-preload-20210816220706-6487" [b5a0fab9-5c04-4891-b30c-a1130c13c9bd] Running
	I0816 22:10:06.725798  135006 system_pods.go:61] "storage-provisioner" [fae42d16-b932-4470-9cd3-c2111bdc755b] Running
	I0816 22:10:06.725815  135006 system_pods.go:74] duration metric: took 174.940234ms to wait for pod list to return data ...
	I0816 22:10:06.725824  135006 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:10:06.924734  135006 default_sa.go:45] found service account: "default"
	I0816 22:10:06.924757  135006 default_sa.go:55] duration metric: took 198.926519ms for default service account to be created ...
	I0816 22:10:06.924768  135006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:10:07.125607  135006 system_pods.go:86] 9 kube-system pods found
	I0816 22:10:07.125633  135006 system_pods.go:89] "coredns-6955765f44-48hr7" [58a77582-822c-4fc0-ab12-61c561140d80] Running
	I0816 22:10:07.125639  135006 system_pods.go:89] "coredns-6955765f44-5cdbh" [255731e0-831a-4b73-b470-4fc360e4d9f7] Running
	I0816 22:10:07.125643  135006 system_pods.go:89] "etcd-test-preload-20210816220706-6487" [11378e4f-5b9a-41ad-b820-996b630a1424] Running
	I0816 22:10:07.125647  135006 system_pods.go:89] "kindnet-rqhqj" [0a087d54-6fdf-4ea4-90ad-61ef85ea5903] Running
	I0816 22:10:07.125651  135006 system_pods.go:89] "kube-apiserver-test-preload-20210816220706-6487" [a00ab7e5-e0a3-4025-844f-2620dc529dd0] Running
	I0816 22:10:07.125655  135006 system_pods.go:89] "kube-controller-manager-test-preload-20210816220706-6487" [17b8eeb3-ca87-49f7-a747-f2df66a1ec83] Running
	I0816 22:10:07.125658  135006 system_pods.go:89] "kube-proxy-n6mvk" [4f4fb91c-1b09-46a7-9ea5-3cf07321a116] Running
	I0816 22:10:07.125662  135006 system_pods.go:89] "kube-scheduler-test-preload-20210816220706-6487" [b5a0fab9-5c04-4891-b30c-a1130c13c9bd] Running
	I0816 22:10:07.125666  135006 system_pods.go:89] "storage-provisioner" [fae42d16-b932-4470-9cd3-c2111bdc755b] Running
	I0816 22:10:07.125672  135006 system_pods.go:126] duration metric: took 200.898929ms to wait for k8s-apps to be running ...
	I0816 22:10:07.125679  135006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:10:07.125716  135006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:10:07.134978  135006 system_svc.go:56] duration metric: took 9.294045ms WaitForService to wait for kubelet.
	I0816 22:10:07.134996  135006 kubeadm.go:547] duration metric: took 3.280212216s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:10:07.135019  135006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:10:07.325547  135006 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:10:07.325574  135006 node_conditions.go:123] node cpu capacity is 8
	I0816 22:10:07.325586  135006 node_conditions.go:105] duration metric: took 190.561724ms to run NodePressure ...
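
The node_conditions lines read each node's reported capacity and verify that no pressure condition is set. A sketch under the same client-go assumption as above:

    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure prints each node's capacity and fails if any
    // memory/disk/PID pressure condition is not False.
    func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
    			n.Name, n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status != corev1.ConditionFalse {
    					return fmt.Errorf("node %s reports %s=%s", n.Name, c.Type, c.Status)
    				}
    			}
    		}
    	}
    	return nil
    }
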
	I0816 22:10:07.325595  135006 start.go:231] waiting for startup goroutines ...
	I0816 22:10:07.367381  135006 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0816 22:10:07.369962  135006 out.go:177] 
	W0816 22:10:07.370117  135006 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0816 22:10:07.371577  135006 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0816 22:10:07.373081  135006 out.go:177] * Done! kubectl is now configured to use "test-preload-20210816220706-6487" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:07:08 UTC, end at Mon 2021-08-16 22:10:08 UTC. --
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.933721723Z" level=info msg="Removed container 38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1: kube-system/kube-proxy-wj5tb/kube-proxy" id=027c3c93-230c-4c2e-b4a0-6fd686f7ef40 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.940934857Z" level=info msg="Ran pod sandbox e398bb5ad80ece022db686236c1e20d78377aae5847f7c2ea01e32f27663f3f7 with infra container: kube-system/coredns-6955765f44-48hr7/POD" id=84abd360-1ff9-4ea6-8ee0-3d438df2f91c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.941595235Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.6.5" id=cc7bdd44-284b-4244-be0d-2f5c5eb65922 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.942135934Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,RepoTags:[k8s.gcr.io/coredns:1.6.5],RepoDigests:[k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2],Size_:41706553,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cc7bdd44-284b-4244-be0d-2f5c5eb65922 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.942694360Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.6.5" id=36f64196-c7c8-4a7e-aed8-1224983a465d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.943311832Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,RepoTags:[k8s.gcr.io/coredns:1.6.5],RepoDigests:[k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2],Size_:41706553,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=36f64196-c7c8-4a7e-aed8-1224983a465d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.944178545Z" level=info msg="Creating container: kube-system/coredns-6955765f44-48hr7/coredns" id=2fbf1719-5c93-4232-9943-fb82009497ae name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.956162567Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4d705caa3ffd189933d99942e236f9435006d7f4a5915fd1f2886084a04969e4/merged/etc/passwd: no such file or directory"
	Aug 16 22:09:39 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:39.956206059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4d705caa3ffd189933d99942e236f9435006d7f4a5915fd1f2886084a04969e4/merged/etc/group: no such file or directory"
	Aug 16 22:09:40 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:40.091182877Z" level=info msg="Created container 0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98: kube-system/coredns-6955765f44-48hr7/coredns" id=2fbf1719-5c93-4232-9943-fb82009497ae name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:09:40 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:40.091669415Z" level=info msg="Starting container: 0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98" id=e0a0b076-13f3-450e-b6f6-edf212065b03 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:09:40 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:40.100792925Z" level=info msg="Started container 0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98: kube-system/coredns-6955765f44-48hr7/coredns" id=e0a0b076-13f3-450e-b6f6-edf212065b03 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:09:40 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:40.902181489Z" level=info msg="Stopping pod sandbox: de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d" id=e4b8a081-ef18-4891-8063-7510aad3f3b6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 22:09:40 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:40.902235172Z" level=info msg="Stopped pod sandbox (already stopped): de0ac02e877c59265044faf0365a4bee6208e4c6286d410b9b8765fd02f5826d" id=e4b8a081-ef18-4891-8063-7510aad3f3b6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.223773732Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-n6mvk/POD" id=b7a74d97-eeaf-4a03-b773-3d5ec69a5981 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.355188240Z" level=info msg="Ran pod sandbox da3133b1648e3558bbf7291204efbc9ffb349f52bd0d17169c0d2920eb9e3260 with infra container: kube-system/kube-proxy-n6mvk/POD" id=b7a74d97-eeaf-4a03-b773-3d5ec69a5981 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.356075749Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.17.3" id=f1fc2e2d-7b18-428a-8132-b31ec8bbf4f6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.356838574Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1,RepoTags:[k8s.gcr.io/kube-proxy:v1.17.3],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:eda5dcdcce414ad866e3e30cf7ec3e35684ef0d7db3f097b31082b708d88c7a1],Size_:117949983,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f1fc2e2d-7b18-428a-8132-b31ec8bbf4f6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.357483437Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.17.3" id=ac931740-a304-491d-adca-fe841db3dac1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.358201761Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1,RepoTags:[k8s.gcr.io/kube-proxy:v1.17.3],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:eda5dcdcce414ad866e3e30cf7ec3e35684ef0d7db3f097b31082b708d88c7a1],Size_:117949983,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ac931740-a304-491d-adca-fe841db3dac1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.358944022Z" level=info msg="Creating container: kube-system/kube-proxy-n6mvk/kube-proxy" id=da27d275-b0ca-473a-add0-9819ce16e74a name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.458785300Z" level=info msg="Created container 75d43e418a9259c80672ed04e6ed68c9a0b69c30dff04bc89bed4cd06c6bc260: kube-system/kube-proxy-n6mvk/kube-proxy" id=da27d275-b0ca-473a-add0-9819ce16e74a name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.459319771Z" level=info msg="Starting container: 75d43e418a9259c80672ed04e6ed68c9a0b69c30dff04bc89bed4cd06c6bc260" id=a6d70d93-2f30-4068-b412-c78a302c24bd name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:09:41 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:09:41.468892126Z" level=info msg="Started container 75d43e418a9259c80672ed04e6ed68c9a0b69c30dff04bc89bed4cd06c6bc260: kube-system/kube-proxy-n6mvk/kube-proxy" id=a6d70d93-2f30-4068-b412-c78a302c24bd name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:10:03 test-preload-20210816220706-6487 crio[4453]: time="2021-08-16 22:10:03.361308296Z" level=info msg="Stopping container: 0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98 (timeout: 30s)" id=78736a74-be87-4aa6-8c35-8a5548ec6ddf name=/runtime.v1alpha2.RuntimeService/StopContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID
	75d43e418a925       ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1                                     27 seconds ago       Running             kube-proxy                0                   da3133b1648e3
	0c8d37f573b79       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     28 seconds ago       Exited              coredns                   0                   e398bb5ad80ec
	6d2140556b3be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     41 seconds ago       Running             storage-provisioner       1                   7b3a19b753e8f
	abf098330bf02       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                     42 seconds ago       Running             kindnet-cni               1                   39a8ed9965e6f
	6034300bdc9f9       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     42 seconds ago       Running             coredns                   1                   48abd00c5cb4f
	18cec30e74003       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad                                     48 seconds ago       Running             kube-scheduler            0                   3d0e59baabb11
	0db980e30c35a       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302                                     48 seconds ago       Running             kube-controller-manager   0                   40aec91365b62
	58641bf98a7ad       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b                                     48 seconds ago       Running             kube-apiserver            0                   57ae52833ce95
	a763eb46578f7       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     48 seconds ago       Running             etcd                      1                   021f018475a4b
	febcfcb711ef6       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     About a minute ago   Exited              coredns                   0                   48abd00c5cb4f
	ab6cb51a35acc       docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1   About a minute ago   Exited              kindnet-cni               0                   39a8ed9965e6f
	1da8ca16721bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     About a minute ago   Exited              storage-provisioner       0                   7b3a19b753e8f
	9bea27bcbbbbf       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28                                     2 minutes ago        Exited              kube-scheduler            0                   b04a18187019f
	66694634fd317       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     2 minutes ago        Exited              etcd                      0                   021f018475a4b
	65aed2c6f8e06       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2                                     2 minutes ago        Exited              kube-apiserver            0                   9185aac4b4c00
	3092b31ecd6bb       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056                                     2 minutes ago        Exited              kube-controller-manager   0                   1e64d6ee94792
	
	* 
	* ==> coredns [0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [6034300bdc9f999719ccfafeae6b0533ecbc28e85c2d3f30e6d6fe38d7c9ce12] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> coredns [febcfcb711ef632bf1215fc8187796fa35fadbb6d235db09a9d2951e0b27c44a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210816220706-6487
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210816220706-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=test-preload-20210816220706-6487
	                    minikube.k8s.io/updated_at=2021_08_16T22_08_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:08:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210816220706-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:10:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 22:09:24 +0000   Mon, 16 Aug 2021 22:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 22:09:24 +0000   Mon, 16 Aug 2021 22:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 22:09:24 +0000   Mon, 16 Aug 2021 22:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 22:09:24 +0000   Mon, 16 Aug 2021 22:08:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    test-preload-20210816220706-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                56002a56-8a94-4364-9ae4-328d2f87119d
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-48hr7                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 coredns-6955765f44-5cdbh                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-test-preload-20210816220706-6487                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kindnet-rqhqj                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-test-preload-20210816220706-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-test-preload-20210816220706-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-n6mvk                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-test-preload-20210816220706-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             190Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                          Message
	  ----    ------                   ----                 ----                                          -------
	  Normal  Starting                 2m4s                 kubelet, test-preload-20210816220706-6487     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x4 over 2m4s)  kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x3 over 2m4s)  kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x4 over 2m4s)  kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet, test-preload-20210816220706-6487     Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s                 kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                106s                 kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeReady
	  Normal  Starting                 100s                 kube-proxy, test-preload-20210816220706-6487  Starting kube-proxy.
	  Normal  Starting                 50s                  kubelet, test-preload-20210816220706-6487     Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 50s)    kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 50s)    kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 50s)    kubelet, test-preload-20210816220706-6487     Node test-preload-20210816220706-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                  kube-proxy, test-preload-20210816220706-6487  Starting kube-proxy.
	  Normal  Starting                 27s                  kube-proxy, test-preload-20210816220706-6487  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000019] ll header: 00000000: 02 42 40 0d 8b 34 02 42 c0 a8 31 02 08 00        .B@..4.B..1...
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-4350192b6afa
	[  +0.000007] ll header: 00000000: 02 42 40 0d 8b 34 02 42 c0 a8 31 02 08 00        .B@..4.B..1...
	[Aug16 22:02] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.902031] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.330213] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethf50720c1
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff ce 6a 98 24 a5 30 08 06        .......j.$.0..
	[Aug16 22:03] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:04] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth5f3972f4
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff a6 11 09 e8 7e 16 08 06        ..........~...
	[  +0.300282] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth21ab231d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 d8 3f 3b 8d ae 08 06        ........?;....
	[ +17.949768] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.154757] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth624986e0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 9e 30 d7 60 0b 08 06        ........0.`...
	[  +2.581947] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:07] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:08] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff c6 82 c6 f9 93 19 08 06        ..............
	[  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff c6 82 c6 f9 93 19 08 06        ..............
	[ +11.413047] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd9e49d47
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 12 92 46 65 b3 c4 08 06        ........Fe....
	[Aug16 22:09] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc3dd548c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 92 93 89 b0 1e dc 08 06        ..............
	
	* 
	* ==> etcd [66694634fd317233a47b80a64b79c06558b8aba0d05843bd39f5e1c122f36d5e] <==
	* raft2021/08/16 22:08:06 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:08:06.238661 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-16 22:08:06.240072 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:08:06.240238 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:08:06.240321 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/16 22:08:06 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/16 22:08:06 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/16 22:08:06 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/16 22:08:06 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/16 22:08:06 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-16 22:08:06.932181 I | embed: ready to serve client requests
	2021-08-16 22:08:06.932237 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:08:06.932299 I | etcdserver: published {Name:test-preload-20210816220706-6487 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-16 22:08:06.932339 I | embed: ready to serve client requests
	2021-08-16 22:08:06.934198 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:08:06.934277 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:08:06.934570 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-16 22:08:06.935030 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:08:14.569556 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (589.032581ms) to execute
	2021-08-16 22:08:14.569769 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-test-preload-20210816220706-6487\" " with result "range_response_count:1 size:1321" took too long (633.663238ms) to execute
	2021-08-16 22:08:24.865526 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (179.617541ms) to execute
	2021-08-16 22:08:32.011158 W | wal: sync duration of 2.164522685s, expected less than 1s
	2021-08-16 22:08:32.012191 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-5cdbh\" " with result "range_response_count:1 size:1703" took too long (1.834206767s) to execute
	2021-08-16 22:08:32.012298 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (784.783128ms) to execute
	2021-08-16 22:08:32.012412 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-wj5tb\" " with result "range_response_count:1 size:2163" took too long (1.167486855s) to execute
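
Editor's note: the repeated "took too long" warnings together with "wal: sync duration of 2.16s, expected less than 1s" point at slow disk I/O on the CI host rather than an etcd bug. One way to confirm from a client is to time a single range request against the same endpoint. A minimal sketch, assuming the etcd 3.4 client library; the certificate paths are copied from the ClientTLS line above, though a real probe would likely need the apiserver's etcd client certificate instead:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/pkg/transport"
)

func main() {
	// Certificate paths taken from the ClientTLS line in the log above
	// (assumption: these are acceptable as client credentials).
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsCfg, err := tlsInfo.ClientConfig()
	if err != nil {
		log.Fatal(err)
	}

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Time one read: on a healthy disk this is single-digit milliseconds,
	// while the warnings above show reads taking 500ms-1.8s.
	start := time.Now()
	if _, err := cli.Get(context.Background(), "/registry/namespaces/default"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("range request took", time.Since(start))
}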
	
	* 
	* ==> etcd [a763eb46578f78ccd012748508d22d47578c1ae9b958676ab1721ad40520bb53] <==
	* raft2021/08/16 22:09:19 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/16 22:09:19 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/16 22:09:19 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 458, applied: 0, lastindex: 458, lastterm: 2]
	2021-08-16 22:09:19.818114 W | auth: simple token is not cryptographically signed
	2021-08-16 22:09:19.820103 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/16 22:09:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:09:19.820849 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-16 22:09:19.820964 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:09:19.821026 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:09:19.822675 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:09:19.822771 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-16 22:09:19.823235 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/16 22:09:21 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/16 22:09:21 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/16 22:09:21 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/16 22:09:21 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/16 22:09:21 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-16 22:09:21.714461 I | etcdserver: published {Name:test-preload-20210816220706-6487 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-16 22:09:21.714482 I | embed: ready to serve client requests
	2021-08-16 22:09:21.714567 I | embed: ready to serve client requests
	2021-08-16 22:09:21.715663 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:09:21.716140 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-16 22:09:39.460829 W | etcdserver: request "header:<ID:8128007015370704651 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-6955765f44\" mod_revision:416 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-6955765f44\" value_size:1208 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-6955765f44\" > >>" with result "size:16" took too long (133.155242ms) to execute
	2021-08-16 22:09:51.229104 W | etcdserver: request "header:<ID:8128007015370704748 > lease_revoke:<id:70cc7b51030dc6d5>" with result "size:29" took too long (131.307695ms) to execute
	2021-08-16 22:09:51.229227 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-48hr7\" " with result "range_response_count:1 size:1879" took too long (402.20251ms) to execute
	
	* 
	* ==> kernel <==
	*  22:10:08 up 49 min,  0 users,  load average: 1.08, 1.22, 0.95
	Linux test-preload-20210816220706-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [58641bf98a7ad6421a27688e8029e041753df5d22b7327843abe4a4d44e5aca1] <==
	* I0816 22:09:23.973995       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0816 22:09:23.974017       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:09:23.974031       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:09:23.974278       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0816 22:09:23.974329       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	E0816 22:09:24.030501       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:09:24.112116       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:09:24.112166       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:09:24.112381       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:09:24.112745       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:09:24.113052       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0816 22:09:24.123119       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:09:24.972384       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0816 22:09:24.972411       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:09:24.972422       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:09:24.975938       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0816 22:09:25.665677       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0816 22:09:25.748059       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0816 22:09:25.761953       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0816 22:09:25.806968       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:09:25.816379       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:09:38.996428       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0816 22:09:39.344970       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:09:39.393399       1 controller.go:606] quota admission added evaluator for: endpoints
	I0816 22:09:39.473719       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [65aed2c6f8e06e2c095d5a1363a4a5e92b5e2932622e09e443e201b030c51756] <==
	* W0816 22:09:17.182042       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182044       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0816 22:09:17.182080       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182163       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182177       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182215       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182259       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182293       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:09:17.182313       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	W0816 22:09:17.182329       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182357       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182361       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182375       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182381       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182401       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182407       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182408       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182413       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182421       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182436       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182442       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182446       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182460       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182463       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0816 22:09:17.182523       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [0db980e30c35a593730c4f1482c9e9a6126b8d4462cbca11127b292f464dd7c4] <==
	* I0816 22:09:39.144135       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0816 22:09:39.159123       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
	I0816 22:09:39.179027       1 shared_informer.go:204] Caches are synced for expand 
	I0816 22:09:39.217808       1 shared_informer.go:204] Caches are synced for stateful set 
	I0816 22:09:39.242015       1 shared_informer.go:204] Caches are synced for PVC protection 
	I0816 22:09:39.341946       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0816 22:09:39.370900       1 shared_informer.go:204] Caches are synced for taint 
	I0816 22:09:39.370970       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0816 22:09:39.371018       1 node_lifecycle_controller.go:1058] Missing timestamp for Node test-preload-20210816220706-6487. Assuming now as a timestamp.
	I0816 22:09:39.371027       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0816 22:09:39.371064       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0816 22:09:39.371162       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"test-preload-20210816220706-6487", UID:"17a93bcc-05a9-4a81-8f60-79337b1a8a46", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node test-preload-20210816220706-6487 event: Registered Node test-preload-20210816220706-6487 in Controller
	I0816 22:09:39.392222       1 shared_informer.go:204] Caches are synced for endpoint 
	I0816 22:09:39.432999       1 shared_informer.go:204] Caches are synced for resource quota 
	I0816 22:09:39.454164       1 shared_informer.go:204] Caches are synced for attach detach 
	I0816 22:09:39.461942       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"980fb434-694b-4e0b-8731-ed3f175dec15", APIVersion:"apps/v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
	I0816 22:09:39.461997       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0816 22:09:39.462008       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:09:39.467420       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"71462b63-3be8-4c90-9671-734ae8a2d36c", APIVersion:"apps/v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-48hr7
	I0816 22:09:39.467549       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"46766b0f-d8b9-4737-8bc6-23abafce8d96", APIVersion:"apps/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kube-proxy-wj5tb
	I0816 22:09:39.492341       1 shared_informer.go:204] Caches are synced for resource quota 
	I0816 22:09:39.494029       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0816 22:09:40.915285       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"46766b0f-d8b9-4737-8bc6-23abafce8d96", APIVersion:"apps/v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n6mvk
	I0816 22:10:03.355321       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"980fb434-694b-4e0b-8731-ed3f175dec15", APIVersion:"apps/v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-6955765f44 to 1
	I0816 22:10:03.359391       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"71462b63-3be8-4c90-9671-734ae8a2d36c", APIVersion:"apps/v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-6955765f44-48hr7
	
	* 
	* ==> kube-controller-manager [3092b31ecd6bb4312bed487c67745381aef9a261121af23448b73b4d2aebf293] <==
	* E0816 22:09:17.369392       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://control-plane.minikube.internal:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369395       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=146&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369408       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369418       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=36&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369429       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369431       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=355&timeout=9m7s&timeoutSeconds=547&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369451       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m49s&timeoutSeconds=589&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369451       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=361&timeout=6m50s&timeoutSeconds=410&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369461       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=430&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369467       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369499       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=333&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369537       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369540       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=360&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369544       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369552       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://control-plane.minikube.internal:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=358&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369572       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369720       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=416&timeout=9m12s&timeoutSeconds=552&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369732       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369723       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369755       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369760       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=431&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369785       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=204&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369903       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=6m25s&timeoutSeconds=385&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369975       1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://control-plane.minikube.internal:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=42&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0816 22:09:17.369972       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://control-plane.minikube.internal:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=340&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [75d43e418a9259c80672ed04e6ed68c9a0b69c30dff04bc89bed4cd06c6bc260] <==
	* W0816 22:09:41.522528       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0816 22:09:41.528944       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0816 22:09:41.528969       1 server_others.go:145] Using iptables Proxier.
	I0816 22:09:41.529143       1 server.go:571] Version: v1.17.3
	I0816 22:09:41.529602       1 config.go:313] Starting service config controller
	I0816 22:09:41.529631       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0816 22:09:41.529817       1 config.go:131] Starting endpoints config controller
	I0816 22:09:41.529838       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0816 22:09:41.629852       1 shared_informer.go:204] Caches are synced for service config 
	I0816 22:09:41.629979       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [18cec30e74003970d2ce661652922ba5593bedbc914dd4a63711f12c09b4b446] <==
	* I0816 22:09:20.686325       1 serving.go:312] Generated self-signed cert in-memory
	W0816 22:09:20.993059       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0816 22:09:20.993108       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0816 22:09:24.024334       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:09:24.024455       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:09:24.024496       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:09:24.024530       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	W0816 22:09:24.037892       1 authorization.go:47] Authorization is disabled
	W0816 22:09:24.037911       1 authentication.go:92] Authentication is disabled
	I0816 22:09:24.037922       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0816 22:09:24.039133       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:09:24.039168       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:09:24.039539       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0816 22:09:24.039661       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0816 22:09:24.139351       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [9bea27bcbbbbffff5ba82ea900470db0d293e82e8e84b670c6953959bca82842] <==
	* W0816 22:08:09.926514       1 authentication.go:92] Authentication is disabled
	I0816 22:08:09.926530       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0816 22:08:09.927585       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:08:09.927617       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:08:09.927777       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0816 22:08:09.927801       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	E0816 22:08:09.930362       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:08:09.930426       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:08:09.930360       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:08:09.930579       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:08:09.930585       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:08:09.930625       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:08:09.930742       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:08:09.931121       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:08:09.931187       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:08:09.931366       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:08:09.931540       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:08:09.932786       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:08:10.932524       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:08:10.932917       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:08:10.933454       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:08:10.934700       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:08:10.935870       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0816 22:08:12.027780       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0816 22:08:27.552772       1 factory.go:494] pod is already present in unschedulableQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:07:08 UTC, end at Mon 2021-08-16 22:10:08 UTC. --
	Aug 16 22:09:25 test-preload-20210816220706-6487 kubelet[6423]: W0816 22:09:25.872455    6423 pod_container_deletor.go:75] Container "9185aac4b4c007d2cac3ef21321ad38ababd4defca9a52282c2e6089dd23c32b" not found in pod's containers
	Aug 16 22:09:25 test-preload-20210816220706-6487 kubelet[6423]: W0816 22:09:25.873315    6423 pod_container_deletor.go:75] Container "1e64d6ee9479283ea365aeca2b5f7a63051c55edd64d18ddb83a155d0c5365f6" not found in pod's containers
	Aug 16 22:09:25 test-preload-20210816220706-6487 kubelet[6423]: W0816 22:09:25.874181    6423 pod_container_deletor.go:75] Container "b04a18187019f6d7e17187fce40893876d20e040a7273594fc62053434c76123" not found in pod's containers
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.577982    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58a77582-822c-4fc0-ab12-61c561140d80-config-volume") pod "coredns-6955765f44-48hr7" (UID: "58a77582-822c-4fc0-ab12-61c561140d80")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.578024    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-hx4gh" (UniqueName: "kubernetes.io/secret/58a77582-822c-4fc0-ab12-61c561140d80-coredns-token-hx4gh") pod "coredns-6955765f44-48hr7" (UID: "58a77582-822c-4fc0-ab12-61c561140d80")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: E0816 22:09:39.934230    6423 remote_runtime.go:295] ContainerStatus "b3e5c1d8df983f40ee76618207c578551d922d07b34a983959bf98a27ce45e80" from runtime service failed: rpc error: code = NotFound desc = could not find container "b3e5c1d8df983f40ee76618207c578551d922d07b34a983959bf98a27ce45e80": container with ID starting with b3e5c1d8df983f40ee76618207c578551d922d07b34a983959bf98a27ce45e80 not found: ID does not exist
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: E0816 22:09:39.934564    6423 remote_runtime.go:295] ContainerStatus "38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1" from runtime service failed: rpc error: code = NotFound desc = could not find container "38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1": container with ID starting with 38e217fbaedb05e1041e8c8df911440075e03603905ae5944baa7a43ef8a84e1 not found: ID does not exist
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978819    6423 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978864    6423 reconciler.go:183] operationExecutor.UnmountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-lib-modules") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978894    6423 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kube-proxy-token-4gt5t" (UniqueName: "kubernetes.io/secret/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy-token-4gt5t") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978919    6423 reconciler.go:183] operationExecutor.UnmountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-xtables-lock") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73")
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978935    6423 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.978976    6423 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: W0816 22:09:39.978986    6423 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/2dca3ccc-d899-4c38-ae23-c8c33768dd73/volumes/kubernetes.io~configmap/kube-proxy: ClearQuota called, but quotas disabled
	Aug 16 22:09:39 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:39.979218    6423 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.008264    6423 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy-token-4gt5t" (OuterVolumeSpecName: "kube-proxy-token-4gt5t") pod "2dca3ccc-d899-4c38-ae23-c8c33768dd73" (UID: "2dca3ccc-d899-4c38-ae23-c8c33768dd73"). InnerVolumeSpecName "kube-proxy-token-4gt5t". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.079147    6423 reconciler.go:303] Volume detached for volume "kube-proxy-token-4gt5t" (UniqueName: "kubernetes.io/secret/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy-token-4gt5t") on node "test-preload-20210816220706-6487" DevicePath ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.079173    6423 reconciler.go:303] Volume detached for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-xtables-lock") on node "test-preload-20210816220706-6487" DevicePath ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.079180    6423 reconciler.go:303] Volume detached for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2dca3ccc-d899-4c38-ae23-c8c33768dd73-kube-proxy") on node "test-preload-20210816220706-6487" DevicePath ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.079187    6423 reconciler.go:303] Volume detached for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2dca3ccc-d899-4c38-ae23-c8c33768dd73-lib-modules") on node "test-preload-20210816220706-6487" DevicePath ""
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.980714    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-4gt5t" (UniqueName: "kubernetes.io/secret/4f4fb91c-1b09-46a7-9ea5-3cf07321a116-kube-proxy-token-4gt5t") pod "kube-proxy-n6mvk" (UID: "4f4fb91c-1b09-46a7-9ea5-3cf07321a116")
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.980756    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/4f4fb91c-1b09-46a7-9ea5-3cf07321a116-xtables-lock") pod "kube-proxy-n6mvk" (UID: "4f4fb91c-1b09-46a7-9ea5-3cf07321a116")
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.980775    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/4f4fb91c-1b09-46a7-9ea5-3cf07321a116-lib-modules") pod "kube-proxy-n6mvk" (UID: "4f4fb91c-1b09-46a7-9ea5-3cf07321a116")
	Aug 16 22:09:40 test-preload-20210816220706-6487 kubelet[6423]: I0816 22:09:40.980791    6423 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4f4fb91c-1b09-46a7-9ea5-3cf07321a116-kube-proxy") pod "kube-proxy-n6mvk" (UID: "4f4fb91c-1b09-46a7-9ea5-3cf07321a116")
	Aug 16 22:10:08 test-preload-20210816220706-6487 kubelet[6423]: E0816 22:10:08.959139    6423 remote_runtime.go:295] ContainerStatus "0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98" from runtime service failed: rpc error: code = NotFound desc = could not find container "0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98": container with ID starting with 0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98 not found: ID does not exist
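
Editor's note: the ContainerStatus NotFound errors above are the kubelet querying CRI-O for containers that were already removed during the kube-proxy and coredns churn; they are noisy but benign. The same query can be reproduced manually through crictl. A tiny hedged wrapper (crictl's inspect subcommand is real; the Go helper and its use of sudo are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// containerExists asks CRI-O (via crictl) whether a container ID is still
// known to the runtime; `crictl inspect` exits non-zero with a NotFound
// message for removed containers, matching the kubelet errors above.
func containerExists(id string) bool {
	return exec.Command("sudo", "crictl", "inspect", id).Run() == nil
}

func main() {
	fmt.Println(containerExists("0c8d37f573b799218bcae8d1f3b0d174ff59c27cca018c3a4f4ca6f93dfcdb98"))
}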
	
	* 
	* ==> storage-provisioner [1da8ca16721bb36d63748e336577a0fd07425325eb6ade463a8c4fb9e4debdd7] <==
	* I0816 22:08:32.034580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:08:32.043166       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:08:32.043228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:08:32.049188       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:08:32.049323       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210816220706-6487_6d66a967-935c-4eb4-92ef-be67af84ca2b!
	I0816 22:08:32.049349       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70c0790c-1283-4178-988d-9fe8e60e3cd9", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210816220706-6487_6d66a967-935c-4eb4-92ef-be67af84ca2b became leader
	I0816 22:08:32.149559       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210816220706-6487_6d66a967-935c-4eb4-92ef-be67af84ca2b!
	
	* 
	* ==> storage-provisioner [6d2140556b3be0fdda2a98e2db43c7e0e2e528b47abc198c8321b20d9e7b081b] <==
	* I0816 22:09:26.717571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:09:26.725130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:09:26.725169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:09:44.116604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:09:44.116670       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70c0790c-1283-4178-988d-9fe8e60e3cd9", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210816220706-6487_80abfd6f-6e2a-49ab-85ed-ae423588ab40 became leader
	I0816 22:09:44.116721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210816220706-6487_80abfd6f-6e2a-49ab-85ed-ae423588ab40!
	I0816 22:09:44.216947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210816220706-6487_80abfd6f-6e2a-49ab-85ed-ae423588ab40!
	

                                                
                                                
-- /stdout --
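
Editor's note: the two storage-provisioner instances in the dump coordinate through the same lease (kube-system/k8s.io-minikube-hostpath); the second instance blocks from 22:09:26 to 22:09:44 waiting for the first holder's lease to expire, which is why both logs show a clean handover. A rough sketch of that acquisition pattern with client-go's leaderelection package, using the namespace/name from the events above (the Kind:"Endpoints" event suggests an endpoints-backed lock); the tunables and callbacks are illustrative, not the provisioner's actual code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock, err := resourcelock.New(
		resourcelock.EndpointsResourceLock, // matches the Kind:"Endpoints" event above
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative tunables
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("lease acquired; start the provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lease lost; shut down")
			},
		},
	})
}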
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210816220706-6487 -n test-preload-20210816220706-6487
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210816220706-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context test-preload-20210816220706-6487 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context test-preload-20210816220706-6487 describe pod : exit status 1 (47.701703ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context test-preload-20210816220706-6487 describe pod : exit status 1
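
Editor's note: the exit status 1 above is expected noise: the field-selector query returned no non-running pods, so the post-mortem helper invoked `kubectl describe pod` with an empty argument list and kubectl refused with "resource name may not be empty". A minimal sketch of a guard (not the actual helpers_test.go implementation; names are illustrative):

package helpers

import (
	"os/exec"
	"strings"
	"testing"
)

// describeNonRunningPods is a hypothetical guard around the post-mortem
// step: it only runs `kubectl describe pod` when the earlier jsonpath
// query returned at least one pod name, so kubectl is never called with
// an empty resource name.
func describeNonRunningPods(t *testing.T, profile, jsonpathOut string) {
	names := strings.Fields(jsonpathOut)
	if len(names) == 0 {
		t.Logf("no non-running pods in %s; nothing to describe", profile)
		return
	}
	args := append([]string{"--context", profile, "describe", "pod"}, names...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		t.Logf("kubectl describe failed: %v\n%s", err, out)
		return
	}
	t.Logf("%s", out)
}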
helpers_test.go:176: Cleaning up "test-preload-20210816220706-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210816220706-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210816220706-6487: (2.841555806s)
--- FAIL: TestPreload (186.41s)

                                                
                                    

TestScheduledStopUnix (77.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210816221012-6487 --memory=2048 --driver=docker  --container-runtime=crio
E0816 22:10:34.498416    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210816221012-6487 --memory=2048 --driver=docker  --container-runtime=crio: (31.499479996s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816221012-6487 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210816221012-6487 -n scheduled-stop-20210816221012-6487
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816221012-6487 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816221012-6487 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816221012-6487 -n scheduled-stop-20210816221012-6487
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210816221012-6487
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816221012-6487 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210816221012-6487
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210816221012-6487: exit status 3 (1.918734091s)

                                                
                                                
-- stdout --
	scheduled-stop-20210816221012-6487
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:11:24.143319  144488 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0816 22:11:24.143352  144488 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20210816221012-6487
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:11:24.143319  144488 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0816 22:11:24.143352  144488 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
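
Editor's note: the status error above comes from minikube asking an already-stopped container for its SSH host-port: the scheduled stop fired (see ExitCode 137 in the docker inspect output below) before the final status call. The failure mode can be avoided by checking container state first; a hedged sketch, assuming the docker CLI is on PATH (the `docker inspect -f` template syntax is real, the function and names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPortAvailable is an illustrative pre-check: status fails above
// because it asks a stopped container for its SSH port. Checking
// .State.Running first turns that into a clean "stopped" answer.
func sshPortAvailable(container string) (bool, error) {
	out, err := exec.Command("docker", "inspect", "-f", "{{.State.Running}}", container).Output()
	if err != nil {
		return false, fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	ok, err := sshPortAvailable("scheduled-stop-20210816221012-6487")
	fmt.Println(ok, err)
}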
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-16 22:11:24.145172959 +0000 UTC m=+1826.571303661
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210816221012-6487
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210816221012-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9",
	        "Created": "2021-08-16T22:10:13.981327603Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:10:14.413861797Z",
	            "FinishedAt": "2021-08-16T22:11:22.413966652Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9/hosts",
	        "LogPath": "/var/lib/docker/containers/0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9/0fcc5ab1341ec27cfe99dd6c22b74b27d77dbc73657c86512dd3ef3624113fb9-json.log",
	        "Name": "/scheduled-stop-20210816221012-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210816221012-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210816221012-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b019bb9fe2b70e7cd1f5151991a0f112ae3c244fd9a83a4c6cd5ab3829bac19e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b019bb9fe2b70e7cd1f5151991a0f112ae3c244fd9a83a4c6cd5ab3829bac19e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b019bb9fe2b70e7cd1f5151991a0f112ae3c244fd9a83a4c6cd5ab3829bac19e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b019bb9fe2b70e7cd1f5151991a0f112ae3c244fd9a83a4c6cd5ab3829bac19e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210816221012-6487",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210816221012-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210816221012-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210816221012-6487",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210816221012-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a1b11e4cb6f753faad244ae842a62fdc4c141015396b20e6b03b97a7eec734a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3a1b11e4cb6f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210816221012-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0fcc5ab1341e"
	                    ],
	                    "NetworkID": "4db4189b4446a18899ca5fc4428d24990cb06198169be886a8559a9c64d97724",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
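The inspect dump above pins the interesting state in two fields: the container is "exited" with ExitCode 137 (consistent with a SIGKILL-style forced stop) and "Ports" is empty, which is why the earlier SSH lookups failed. When only a couple of fields matter, the `-f`/`--format` Go-template filter keeps the check to one line; a hedged sketch against this run's container:

	# Hedged sketch: extract just the state fields from the payload above.
	docker container inspect -f '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}' scheduled-stop-20210816221012-6487
	# here: exited exit=137 finished=2021-08-16T22:11:22.413966652Z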
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816221012-6487 -n scheduled-stop-20210816221012-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816221012-6487 -n scheduled-stop-20210816221012-6487: exit status 7 (90.929907ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210816221012-6487" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210816221012-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210816221012-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210816221012-6487: (5.366594089s)
--- FAIL: TestScheduledStopUnix (77.13s)
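TestScheduledStopUnix schedules a stop on a running profile and then checks `minikube status` against the expected state; in this run the host had already gone to Error (exit status 3) and then Stopped (exit status 7, flagged "may be ok" by the helper) before the assertions could pass. A hedged sketch of the command flow being exercised, assuming the `--schedule` flag takes a duration as this test drives it:

	# Hedged sketch of the scheduled-stop flow (profile name from this run;
	# the 15s value is illustrative only).
	out/minikube-linux-amd64 stop -p scheduled-stop-20210816221012-6487 --schedule 15s
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816221012-6487
	echo "status exit: $?"   # 0 while Running, 7 once Stopped, 3 on host Error (as seen above)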

TestRunningBinaryUpgrade (149.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.389102141.exe start -p running-upgrade-20210816221326-6487 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.389102141.exe start -p running-upgrade-20210816221326-6487 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m27.464972249s)
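The shape of this upgrade test is visible in the two commands around this point: bring a profile up with an old released binary (the versioned executable in /tmp), then re-run `start` on the same profile with the binary under test and require it to take over the running cluster. A condensed, hedged restatement of that two-step sequence, with paths and the profile name copied from the log:

	# Hedged restatement of the upgrade pattern this test exercises.
	/tmp/minikube-v1.9.0.389102141.exe start -p running-upgrade-20210816221326-6487 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-20210816221326-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	# in this run the second start exits 90 (RUNTIME_ENABLE: crio.sock permission denied; see below)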
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210816221326-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20210816221326-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (58.313967963s)

-- stdout --
	* [running-upgrade-20210816221326-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node running-upgrade-20210816221326-6487 in cluster running-upgrade-20210816221326-6487
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20210816221326-6487" container ...
	
	

-- /stdout --
** stderr ** 
	I0816 22:14:54.336114  193631 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:14:54.336203  193631 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:14:54.336212  193631 out.go:311] Setting ErrFile to fd 2...
	I0816 22:14:54.336217  193631 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:14:54.336359  193631 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:14:54.336606  193631 out.go:305] Setting JSON to false
	I0816 22:14:54.374797  193631 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3261,"bootTime":1629148833,"procs":275,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:14:54.374902  193631 start.go:121] virtualization: kvm guest
	I0816 22:14:54.376643  193631 out.go:177] * [running-upgrade-20210816221326-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:14:54.378260  193631 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:14:54.376800  193631 notify.go:169] Checking for updates...
	I0816 22:14:54.379643  193631 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:14:54.381006  193631 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:14:54.382420  193631 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:14:54.383799  193631 config.go:177] Loaded profile config "running-upgrade-20210816221326-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0816 22:14:54.383819  193631 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:14:54.385703  193631 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:14:54.385743  193631 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:14:54.437636  193631 docker.go:132] docker version: linux-19.03.15
	I0816 22:14:54.437745  193631 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:14:54.520133  193631 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-16 22:14:54.474501137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:14:54.520243  193631 docker.go:244] overlay module found
	I0816 22:14:54.522410  193631 out.go:177] * Using the docker driver based on existing profile
	I0816 22:14:54.522435  193631 start.go:278] selected driver: docker
	I0816 22:14:54.522442  193631 start.go:751] validating driver "docker" against &{Name:running-upgrade-20210816221326-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210816221326-6487 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:14:54.522552  193631 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:14:54.522600  193631 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:14:54.522635  193631 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0816 22:14:54.524177  193631 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:14:54.525044  193631 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:14:54.605177  193631 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-16 22:14:54.561500087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:14:54.605301  193631 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:14:54.605324  193631 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0816 22:14:54.607374  193631 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:14:54.607469  193631 cni.go:93] Creating CNI manager for ""
	I0816 22:14:54.607486  193631 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0816 22:14:54.607495  193631 start_flags.go:277] config:
	{Name:running-upgrade-20210816221326-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210816221326-6487 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:14:54.609380  193631 out.go:177] * Starting control plane node running-upgrade-20210816221326-6487 in cluster running-upgrade-20210816221326-6487
	I0816 22:14:54.609438  193631 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:14:54.611859  193631 out.go:177] * Pulling base image ...
	I0816 22:14:54.611899  193631 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0816 22:14:54.612021  193631 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	W0816 22:14:54.645951  193631 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0816 22:14:54.646174  193631 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/running-upgrade-20210816221326-6487/config.json ...
	I0816 22:14:54.646506  193631 cache.go:108] acquiring lock: {Name:mke3d64dcf3270420cc281e6a6befd30594c50fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.646692  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:14:54.646714  193631 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 218.314µs
	I0816 22:14:54.646730  193631 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:14:54.646750  193631 cache.go:108] acquiring lock: {Name:mk957eac474c5c8305eacffde7f99a20bba586e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.646814  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0816 22:14:54.646829  193631 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 82.794µs
	I0816 22:14:54.646843  193631 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0816 22:14:54.646859  193631 cache.go:108] acquiring lock: {Name:mk1fecffd141ca028e99cc131edfa7a01bcd03c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.646911  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0816 22:14:54.646924  193631 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 68.828µs
	I0816 22:14:54.646938  193631 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0816 22:14:54.646964  193631 cache.go:108] acquiring lock: {Name:mk07bc43ca8ee5ab80f50aa1c427556bca23f344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647036  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0816 22:14:54.647050  193631 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 87.464µs
	I0816 22:14:54.647071  193631 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0816 22:14:54.647089  193631 cache.go:108] acquiring lock: {Name:mkd2db5e33a1b02cf93b9968c82d95627623f106 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647137  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0816 22:14:54.647150  193631 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 67.751µs
	I0816 22:14:54.647162  193631 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0816 22:14:54.647176  193631 cache.go:108] acquiring lock: {Name:mk25aee23b4a67efee2d17c252b431b3094596c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647215  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:14:54.647225  193631 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 51.97µs
	I0816 22:14:54.647235  193631 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0816 22:14:54.647247  193631 cache.go:108] acquiring lock: {Name:mkd82ea648b841d96f18b36063bee48717854ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647290  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0816 22:14:54.647306  193631 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 60.502µs
	I0816 22:14:54.647319  193631 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0816 22:14:54.647332  193631 cache.go:108] acquiring lock: {Name:mk97f9b290671a75f18e23a9fd77b57386ea84e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647380  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0816 22:14:54.647393  193631 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 63.858µs
	I0816 22:14:54.647405  193631 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0816 22:14:54.647421  193631 cache.go:108] acquiring lock: {Name:mkd757956ba096c9c6c2faef405bc87f0df51e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647473  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:14:54.647486  193631 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.144µs
	I0816 22:14:54.647500  193631 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:14:54.647512  193631 cache.go:108] acquiring lock: {Name:mk0b84fbea34d74cc2da16fdbda169da7718e6bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.647557  193631 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:14:54.647571  193631 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 60.556µs
	I0816 22:14:54.647584  193631 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:14:54.647591  193631 cache.go:88] Successfully saved all images to host disk.
	I0816 22:14:54.714427  193631 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:14:54.714458  193631 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:14:54.714476  193631 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:14:54.714513  193631 start.go:313] acquiring machines lock for running-upgrade-20210816221326-6487: {Name:mk460fe8a1515a1b6a4cdb8d596abf0c54b89832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:54.715438  193631 start.go:317] acquired machines lock for "running-upgrade-20210816221326-6487" in 902.105µs
	I0816 22:14:54.715467  193631 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:14:54.715474  193631 fix.go:55] fixHost starting: m01
	I0816 22:14:54.715741  193631 cli_runner.go:115] Run: docker container inspect running-upgrade-20210816221326-6487 --format={{.State.Status}}
	I0816 22:14:54.765839  193631 fix.go:108] recreateIfNeeded on running-upgrade-20210816221326-6487: state=Running err=<nil>
	W0816 22:14:54.765904  193631 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:14:54.768636  193631 out.go:177] * Updating the running docker "running-upgrade-20210816221326-6487" container ...
	I0816 22:14:54.768680  193631 machine.go:88] provisioning docker machine ...
	I0816 22:14:54.768704  193631 ubuntu.go:169] provisioning hostname "running-upgrade-20210816221326-6487"
	I0816 22:14:54.768774  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:54.810740  193631 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:54.810987  193631 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32896 <nil> <nil>}
	I0816 22:14:54.811003  193631 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210816221326-6487 && echo "running-upgrade-20210816221326-6487" | sudo tee /etc/hostname
	I0816 22:14:54.968415  193631 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210816221326-6487
	
	I0816 22:14:54.968495  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:55.013696  193631 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:55.013854  193631 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32896 <nil> <nil>}
	I0816 22:14:55.013883  193631 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210816221326-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210816221326-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210816221326-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:14:55.120007  193631 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:14:55.120051  193631 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:14:55.120079  193631 ubuntu.go:177] setting up certificates
	I0816 22:14:55.120093  193631 provision.go:83] configureAuth start
	I0816 22:14:55.120147  193631 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210816221326-6487
	I0816 22:14:55.161367  193631 provision.go:138] copyHostCerts
	I0816 22:14:55.161447  193631 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:14:55.161461  193631 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:14:55.161515  193631 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:14:55.161626  193631 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:14:55.161641  193631 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:14:55.161665  193631 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:14:55.161743  193631 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:14:55.161754  193631 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:14:55.161776  193631 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:14:55.161902  193631 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210816221326-6487 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210816221326-6487]
	I0816 22:14:55.380488  193631 provision.go:172] copyRemoteCerts
	I0816 22:14:55.380555  193631 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:14:55.380593  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:55.429382  193631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32896 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/running-upgrade-20210816221326-6487/id_rsa Username:docker}
	I0816 22:14:55.515624  193631 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:14:55.535553  193631 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:14:55.559556  193631 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:14:55.584023  193631 provision.go:86] duration metric: configureAuth took 463.917825ms
	I0816 22:14:55.584053  193631 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:14:55.584201  193631 config.go:177] Loaded profile config "running-upgrade-20210816221326-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0816 22:14:55.584339  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:55.636352  193631 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:55.636539  193631 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32896 <nil> <nil>}
	I0816 22:14:55.636559  193631 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:14:56.053256  193631 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:14:56.053287  193631 machine.go:91] provisioned docker machine in 1.284599272s
	I0816 22:14:56.053299  193631 start.go:267] post-start starting for "running-upgrade-20210816221326-6487" (driver="docker")
	I0816 22:14:56.053318  193631 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:14:56.053395  193631 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:14:56.053445  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:56.095605  193631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32896 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/running-upgrade-20210816221326-6487/id_rsa Username:docker}
	I0816 22:14:56.179104  193631 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:14:56.182322  193631 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:14:56.182352  193631 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:14:56.182363  193631 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:14:56.182369  193631 info.go:137] Remote host: Ubuntu 19.10
	I0816 22:14:56.182378  193631 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:14:56.182431  193631 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:14:56.182529  193631 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:14:56.182687  193631 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:14:56.188852  193631 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:14:56.205022  193631 start.go:270] post-start completed in 151.70768ms
	I0816 22:14:56.205086  193631 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:14:56.205141  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:56.250430  193631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32896 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/running-upgrade-20210816221326-6487/id_rsa Username:docker}
	I0816 22:14:56.328354  193631 fix.go:57] fixHost completed within 1.612874359s
	I0816 22:14:56.328387  193631 start.go:80] releasing machines lock for "running-upgrade-20210816221326-6487", held for 1.612931393s
	I0816 22:14:56.328472  193631 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210816221326-6487
	I0816 22:14:56.372155  193631 ssh_runner.go:149] Run: systemctl --version
	I0816 22:14:56.372206  193631 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:14:56.372217  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:56.372274  193631 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210816221326-6487
	I0816 22:14:56.416886  193631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32896 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/running-upgrade-20210816221326-6487/id_rsa Username:docker}
	I0816 22:14:56.418160  193631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32896 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/running-upgrade-20210816221326-6487/id_rsa Username:docker}
	I0816 22:14:56.491786  193631 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:14:56.527508  193631 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:14:56.537354  193631 docker.go:153] disabling docker service ...
	I0816 22:14:56.537408  193631 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:14:56.546661  193631 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:14:56.555082  193631 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:14:56.607363  193631 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:14:56.664433  193631 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:14:56.672722  193631 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:14:56.684227  193631 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0816 22:14:56.691709  193631 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:14:56.698250  193631 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:14:56.698289  193631 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:14:56.704692  193631 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:14:56.710913  193631 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:14:56.770752  193631 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:14:56.843265  193631 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:14:56.843336  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:56.846493  193631 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:57.952269  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:57.956118  193631 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:00.117962  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:00.121338  193631 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:02.744038  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:02.747434  193631 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:05.913253  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:05.916692  193631 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:10.598822  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:10.602078  193631 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:19.615461  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:19.619107  193631 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:26.063267  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:26.066573  193631 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:37.284010  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:37.287626  193631 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:52.587998  193631 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:52.593073  193631 out.go:177] 
	W0816 22:15:52.593187  193631 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0816 22:15:52.593200  193631 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:15:52.594968  193631 out.go:242] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                         │
	│                                                                                                                                                       │
	│    * Please attach the following file to the GitHub issue:                                                                                            │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 22:15:52.596490  193631 out.go:177] 

                                                
                                                
** /stderr **
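Every retry in the stderr above fails the same way: an unprivileged `stat` of /var/run/crio/crio.sock returns `Permission denied`, so the 60s socket wait times out and start exits with RUNTIME_ENABLE. That error (as opposed to `No such file or directory`) means the socket or its parent directory is unreadable to the ssh user, not that CRI-O never started. Hypothetical triage commands for a local reproduction:
	# compare unprivileged vs root access to the socket
	stat /var/run/crio/crio.sock; sudo stat /var/run/crio/crio.sock
	ls -ld /var/run/crio
	# confirm CRI-O itself came up after the systemctl start
	sudo systemctl status crio --no-pager
	sudo journalctl -u crio --no-pager -n 50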
version_upgrade_test.go:140: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20210816221326-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-16 22:15:52.609987823 +0000 UTC m=+2095.036118544
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210816221326-6487
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210816221326-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f",
	        "Created": "2021-08-16T22:13:27.502231657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:28.014316902Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f/5e5ee6d9ad456b902ae418b174254b5b2f47e44d60cc11056b4298d48561fb4f-json.log",
	        "Name": "/running-upgrade-20210816221326-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210816221326-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65c68985dfbedab44c8ea0d2047d96264139a10a2ca2d7a5ca30dbbb9c701798-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65c68985dfbedab44c8ea0d2047d96264139a10a2ca2d7a5ca30dbbb9c701798/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65c68985dfbedab44c8ea0d2047d96264139a10a2ca2d7a5ca30dbbb9c701798/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65c68985dfbedab44c8ea0d2047d96264139a10a2ca2d7a5ca30dbbb9c701798/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210816221326-6487",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210816221326-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210816221326-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210816221326-6487",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210816221326-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb49993b21904586062bf5f1f2d6f84f296c97a366c6b0f2139368c850957b18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eb49993b2190",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1508b852e02e8c668b4b9173adf96a290e81bd545579699aa99628b328026e24",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "cebfb7616e1650d9168a8dbc1fe230ddb125e95fadce7ade8e6cad512b16f560",
	                    "EndpointID": "1508b852e02e8c668b4b9173adf96a290e81bd545579699aa99628b328026e24",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
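The full `docker inspect` dump above is what the harness captures post-mortem; the Go template form used earlier in the run extracts a single field instead. For example, pulling just the forwarded SSH port for this container:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-20210816221326-6487
	# prints 32896, matching the Ports block in the JSON above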
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210816221326-6487 -n running-upgrade-20210816221326-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210816221326-6487 -n running-upgrade-20210816221326-6487: exit status 4 (294.945654ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:15:52.920376  204451 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20210816221326-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 4 (may be ok)
helpers_test.go:242: "running-upgrade-20210816221326-6487" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
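The `exit status 4` is the stale-kubeconfig case flagged in the stdout above: the profile's entry is missing from the kubeconfig, so the status command cannot extract an endpoint IP. The remedy the warning itself suggests is:
	minikube -p running-upgrade-20210816221326-6487 update-context
	kubectl config current-context   # should now name the profile's context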
helpers_test.go:176: Cleaning up "running-upgrade-20210816221326-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210816221326-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210816221326-6487: (2.71575103s)
--- FAIL: TestRunningBinaryUpgrade (149.15s)
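To iterate on this failure locally, the test can be selected with the standard `go test` run filter from a minikube checkout (minikube-specific extra flags, such as the binary path, are omitted here since they vary by setup):
	go test ./test/integration -run 'TestRunningBinaryUpgrade' -v -timeout 60m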

                                                
                                    
TestStoppedBinaryUpgrade (183.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.9.0.738908307.exe start -p stopped-upgrade-20210816221221-6487 --memory=2200 --vm-driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Done: /tmp/minikube-v1.9.0.738908307.exe start -p stopped-upgrade-20210816221221-6487 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m38.400085959s)
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.738908307.exe -p stopped-upgrade-20210816221221-6487 stop
E0816 22:14:11.451881    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.738908307.exe -p stopped-upgrade-20210816221221-6487 stop: (11.400801812s)
version_upgrade_test.go:201: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210816221221-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 22:14:43.702571    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20210816221221-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (1m10.446745196s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210816221221-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node stopped-upgrade-20210816221221-6487 in cluster stopped-upgrade-20210816221221-6487
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-20210816221221-6487" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:14:12.207242  186299 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:14:12.207433  186299 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:14:12.207443  186299 out.go:311] Setting ErrFile to fd 2...
	I0816 22:14:12.207447  186299 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:14:12.207570  186299 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:14:12.207813  186299 out.go:305] Setting JSON to false
	I0816 22:14:12.300934  186299 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3219,"bootTime":1629148833,"procs":264,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:14:12.301057  186299 start.go:121] virtualization: kvm guest
	I0816 22:14:12.304148  186299 out.go:177] * [stopped-upgrade-20210816221221-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:14:12.304312  186299 notify.go:169] Checking for updates...
	I0816 22:14:12.306433  186299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:14:12.308085  186299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:14:12.309458  186299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:14:12.310892  186299 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:14:12.311362  186299 config.go:177] Loaded profile config "stopped-upgrade-20210816221221-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0816 22:14:12.311383  186299 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:14:12.313451  186299 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:14:12.313497  186299 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:14:12.419986  186299 docker.go:132] docker version: linux-19.03.15
	I0816 22:14:12.420086  186299 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:14:12.588478  186299 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-16 22:14:12.488106223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:14:12.588586  186299 docker.go:244] overlay module found
	I0816 22:14:12.590406  186299 out.go:177] * Using the docker driver based on existing profile
	I0816 22:14:12.590437  186299 start.go:278] selected driver: docker
	I0816 22:14:12.590445  186299 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210816221221-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210816221221-6487 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:14:12.590570  186299 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:14:12.590632  186299 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:14:12.590655  186299 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:14:12.592221  186299 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:14:12.593200  186299 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:14:12.752141  186299 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-16 22:14:12.67091469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:14:12.752286  186299 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:14:12.752314  186299 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:14:12.759927  186299 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:14:12.760018  186299 cni.go:93] Creating CNI manager for ""
	I0816 22:14:12.760041  186299 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0816 22:14:12.760052  186299 start_flags.go:277] config:
	{Name:stopped-upgrade-20210816221221-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210816221221-6487 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:14:12.761873  186299 out.go:177] * Starting control plane node stopped-upgrade-20210816221221-6487 in cluster stopped-upgrade-20210816221221-6487
	I0816 22:14:12.761923  186299 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:14:12.763943  186299 out.go:177] * Pulling base image ...
	I0816 22:14:12.763966  186299 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0816 22:14:12.764126  186299 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	W0816 22:14:12.809757  186299 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
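The 404 on the preload tarball is expected here: the v11 cri-o preload for Kubernetes v1.18.0 evidently is not published, so minikube falls back to the per-image cache in the lines that follow. The availability check is reproducible with a plain HTTP request (curl shown for illustration):
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n 1
	# expect a 404 status line, matching the warning above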
	I0816 22:14:12.809951  186299 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816221221-6487/config.json ...
	I0816 22:14:12.810300  186299 cache.go:108] acquiring lock: {Name:mke3d64dcf3270420cc281e6a6befd30594c50fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810452  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:14:12.810470  186299 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 183.022µs
	I0816 22:14:12.810452  186299 cache.go:108] acquiring lock: {Name:mk25aee23b4a67efee2d17c252b431b3094596c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810483  186299 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:14:12.810519  186299 cache.go:108] acquiring lock: {Name:mkd82ea648b841d96f18b36063bee48717854ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810561  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:14:12.810578  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0816 22:14:12.810590  186299 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 74.549µs
	I0816 22:14:12.810602  186299 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0816 22:14:12.810590  186299 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 157.308µs
	I0816 22:14:12.810613  186299 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0816 22:14:12.810616  186299 cache.go:108] acquiring lock: {Name:mk97f9b290671a75f18e23a9fd77b57386ea84e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810628  186299 cache.go:108] acquiring lock: {Name:mk957eac474c5c8305eacffde7f99a20bba586e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810664  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0816 22:14:12.810674  186299 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 61.15µs
	I0816 22:14:12.810681  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0816 22:14:12.810684  186299 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0816 22:14:12.810694  186299 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 66.667µs
	I0816 22:14:12.810707  186299 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0816 22:14:12.810698  186299 cache.go:108] acquiring lock: {Name:mkd757956ba096c9c6c2faef405bc87f0df51e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810722  186299 cache.go:108] acquiring lock: {Name:mk1fecffd141ca028e99cc131edfa7a01bcd03c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810746  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:14:12.810756  186299 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 60.816µs
	I0816 22:14:12.810768  186299 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:14:12.810769  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0816 22:14:12.810780  186299 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 60.159µs
	I0816 22:14:12.810791  186299 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0816 22:14:12.810784  186299 cache.go:108] acquiring lock: {Name:mk0b84fbea34d74cc2da16fdbda169da7718e6bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810808  186299 cache.go:108] acquiring lock: {Name:mk07bc43ca8ee5ab80f50aa1c427556bca23f344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810831  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:14:12.810841  186299 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 59.547µs
	I0816 22:14:12.810852  186299 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:14:12.810862  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0816 22:14:12.810866  186299 cache.go:108] acquiring lock: {Name:mkd2db5e33a1b02cf93b9968c82d95627623f106 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.810873  186299 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 66.593µs
	I0816 22:14:12.810887  186299 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0816 22:14:12.810910  186299 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0816 22:14:12.810922  186299 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 57.835µs
	I0816 22:14:12.810933  186299 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0816 22:14:12.810941  186299 cache.go:88] Successfully saved all images to host disk.
	I0816 22:14:12.930794  186299 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:14:12.930823  186299 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:14:12.930839  186299 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:14:12.930872  186299 start.go:313] acquiring machines lock for stopped-upgrade-20210816221221-6487: {Name:mk6d0af1377e4c2167d0fc66bf33dbf78fe55483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:14:12.930986  186299 start.go:317] acquired machines lock for "stopped-upgrade-20210816221221-6487" in 94.943µs
	I0816 22:14:12.931006  186299 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:14:12.931012  186299 fix.go:55] fixHost starting: m01
	I0816 22:14:12.931323  186299 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210816221221-6487 --format={{.State.Status}}
	I0816 22:14:13.006966  186299 fix.go:108] recreateIfNeeded on stopped-upgrade-20210816221221-6487: state=Stopped err=<nil>
	W0816 22:14:13.007008  186299 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:14:13.009838  186299 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210816221221-6487" ...
	I0816 22:14:13.009907  186299 cli_runner.go:115] Run: docker start stopped-upgrade-20210816221221-6487
	I0816 22:14:16.400233  186299 cli_runner.go:168] Completed: docker start stopped-upgrade-20210816221221-6487: (3.390297893s)
	I0816 22:14:16.400339  186299 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210816221221-6487 --format={{.State.Status}}
	I0816 22:14:16.438531  186299 kic.go:420] container "stopped-upgrade-20210816221221-6487" state is running.
	I0816 22:14:17.297162  186299 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210816221221-6487
	I0816 22:14:21.475413  186299 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816221221-6487/config.json ...
	I0816 22:14:21.478740  186299 machine.go:88] provisioning docker machine ...
	I0816 22:14:21.478783  186299 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210816221221-6487"
	I0816 22:14:21.478841  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:21.520891  186299 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:21.521062  186299 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32909 <nil> <nil>}
	I0816 22:14:21.521078  186299 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210816221221-6487 && echo "stopped-upgrade-20210816221221-6487" | sudo tee /etc/hostname
	I0816 22:14:21.521760  186299 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38576->127.0.0.1:32909: read: connection reset by peer
	I0816 22:14:24.635628  186299 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210816221221-6487
	
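The single `connection reset by peer` right after `docker start` is just sshd still coming up inside the restarted container; libmachine retries internally and succeeds about three seconds later. A manual wait for the same condition could look like this (nc usage is illustrative; the port is the forwarded 22/tcp from this run):
	until nc -z 127.0.0.1 32909; do sleep 1; done   # returns once sshd accepts connections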
	I0816 22:14:24.635717  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:24.681434  186299 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:24.681578  186299 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32909 <nil> <nil>}
	I0816 22:14:24.681596  186299 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210816221221-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210816221221-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210816221221-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:14:24.787923  186299 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:14:24.787955  186299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:14:24.787996  186299 ubuntu.go:177] setting up certificates
	I0816 22:14:24.788007  186299 provision.go:83] configureAuth start
	I0816 22:14:24.788050  186299 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210816221221-6487
	I0816 22:14:24.842981  186299 provision.go:138] copyHostCerts
	I0816 22:14:24.843044  186299 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:14:24.843056  186299 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:14:24.843116  186299 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:14:24.843205  186299 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:14:24.843216  186299 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:14:24.843242  186299 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:14:24.843306  186299 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:14:24.843315  186299 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:14:24.843341  186299 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:14:24.843390  186299 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210816221221-6487 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20210816221221-6487]
	I0816 22:14:24.973781  186299 provision.go:172] copyRemoteCerts
	I0816 22:14:24.973830  186299 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:14:24.973865  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:25.011931  186299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816221221-6487/id_rsa Username:docker}
	I0816 22:14:25.112828  186299 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:14:25.128269  186299 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:14:25.143008  186299 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:14:25.157990  186299 provision.go:86] duration metric: configureAuth took 369.972614ms
	I0816 22:14:25.158012  186299 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:14:25.158187  186299 config.go:177] Loaded profile config "stopped-upgrade-20210816221221-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0816 22:14:25.158326  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:25.199856  186299 main.go:130] libmachine: Using SSH client type: native
	I0816 22:14:25.200075  186299 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32909 <nil> <nil>}
	I0816 22:14:25.200095  186299 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:14:25.905459  186299 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:14:25.905492  186299 machine.go:91] provisioned docker machine in 4.426728444s
	I0816 22:14:25.905505  186299 start.go:267] post-start starting for "stopped-upgrade-20210816221221-6487" (driver="docker")
	I0816 22:14:25.905513  186299 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:14:25.905572  186299 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:14:25.905619  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:25.957991  186299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816221221-6487/id_rsa Username:docker}
	I0816 22:14:26.039844  186299 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:14:26.042840  186299 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:14:26.042866  186299 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:14:26.042879  186299 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:14:26.042887  186299 info.go:137] Remote host: Ubuntu 19.10
	I0816 22:14:26.042900  186299 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:14:26.042956  186299 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:14:26.043046  186299 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:14:26.043157  186299 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:14:26.050359  186299 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:14:26.067849  186299 start.go:270] post-start completed in 162.330584ms
	I0816 22:14:26.067919  186299 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:14:26.067961  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:26.130327  186299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816221221-6487/id_rsa Username:docker}
	I0816 22:14:26.220332  186299 fix.go:57] fixHost completed within 13.289314639s
	I0816 22:14:26.220360  186299 start.go:80] releasing machines lock for "stopped-upgrade-20210816221221-6487", held for 13.289363334s
	I0816 22:14:26.220443  186299 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210816221221-6487
	I0816 22:14:26.274101  186299 ssh_runner.go:149] Run: systemctl --version
	I0816 22:14:26.274160  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:26.274184  186299 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:14:26.274267  186299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210816221221-6487
	I0816 22:14:26.322796  186299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816221221-6487/id_rsa Username:docker}
	I0816 22:14:26.331982  186299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816221221-6487/id_rsa Username:docker}
	I0816 22:14:26.409108  186299 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:14:26.444926  186299 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:14:26.455750  186299 docker.go:153] disabling docker service ...
	I0816 22:14:26.455805  186299 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:14:26.469038  186299 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:14:26.479302  186299 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:14:26.555483  186299 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:14:26.627497  186299 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:14:26.637358  186299 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:14:26.651589  186299 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0816 22:14:26.659971  186299 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:14:26.665829  186299 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:14:26.665874  186299 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:14:26.672796  186299 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:14:26.678792  186299 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:14:26.731303  186299 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:14:26.816402  186299 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:14:26.816467  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:26.820245  186299 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:27.926056  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:27.929853  186299 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:30.092031  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:30.096385  186299 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:32.717747  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:32.721198  186299 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:35.888031  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:35.891423  186299 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:40.573083  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:40.576431  186299 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:49.589147  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:49.592896  186299 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:14:56.038444  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:14:56.042700  186299 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:07.260736  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:07.264356  186299 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0816 22:15:22.564770  186299 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:15:22.570255  186299 out.go:177] 
	W0816 22:15:22.570371  186299 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0816 22:15:22.570382  186299 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0816 22:15:22.572408  186299 out.go:242] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                         │
	│                                                                                                                                                       │
	│    * Please attach the following file to the GitHub issue:                                                                                            │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                         │
	│                                                                                                                                                       │
	│    * Please attach the following file to the GitHub issue:                                                                                            │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 22:15:22.574091  186299 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:203: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20210816221221-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
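note: every probe in the retry loop above failed with "Permission denied" rather than "No such file or directory" — the EACCES case, meaning a path component of /var/run/crio denied search permission to the non-root ssh user, not that the socket was missing. A minimal triage sketch against the kic container (the container name is taken from the log; the socket mode, directory permissions, and crio unit state are assumptions to check, not facts recorded in this report):

	# docker exec runs as the image's default user (root, per the inspect output below)
	docker exec stopped-upgrade-20210816221221-6487 ls -ld /var/run/crio /var/run/crio/crio.sock
	# confirm the crio unit actually stayed up after "systemctl start crio"
	docker exec stopped-upgrade-20210816221221-6487 systemctl status crio --no-pager
	# re-run the exact probe minikube used, but with sudo, to separate permissions from existence
	docker exec stopped-upgrade-20210816221221-6487 sudo stat /var/run/crio/crio.sock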
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-16 22:15:22.592416922 +0000 UTC m=+2065.018547649
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210816221221-6487
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210816221221-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557",
	        "Created": "2021-08-16T22:12:22.99947447Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:14:13.635375955Z",
	            "FinishedAt": "2021-08-16T22:14:11.653706059Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557/hosts",
	        "LogPath": "/var/lib/docker/containers/4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557/4f11c36c089aac22cdaa4a82ef87dc309fbff853c930fe38d8f973b781df9557-json.log",
	        "Name": "/stopped-upgrade-20210816221221-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "stopped-upgrade-20210816221221-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a628545d382c8f2f4cce95e09f9818c83a810e82baa4eae7c2130c8afd9e0642-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a628545d382c8f2f4cce95e09f9818c83a810e82baa4eae7c2130c8afd9e0642/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a628545d382c8f2f4cce95e09f9818c83a810e82baa4eae7c2130c8afd9e0642/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a628545d382c8f2f4cce95e09f9818c83a810e82baa4eae7c2130c8afd9e0642/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210816221221-6487",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210816221221-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210816221221-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210816221221-6487",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210816221221-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bb2e56c2c97a9010bc3d609c6cda57a84ade7433318314404aace54a3aedd44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bb2e56c2c97",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "cde310d350103c88e9d8c58db6c3e78c88d834e4ddb0fe82be51f29dad3bdc49",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "cebfb7616e1650d9168a8dbc1fe230ddb125e95fadce7ade8e6cad512b16f560",
	                    "EndpointID": "cde310d350103c88e9d8c58db6c3e78c88d834e4ddb0fe82be51f29dad3bdc49",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210816221221-6487 -n stopped-upgrade-20210816221221-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210816221221-6487 -n stopped-upgrade-20210816221221-6487: exit status 6 (268.504317ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:15:22.897376  197535 status.go:413] kubeconfig endpoint: extract IP: "stopped-upgrade-20210816221221-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210816221221-6487" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210816221221-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210816221221-6487

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210816221221-6487: (2.180097699s)
--- FAIL: TestStoppedBinaryUpgrade (183.45s)

                                                
                                    
TestPause/serial/Pause (116.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5: exit status 80 (1.907899355s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210816221349-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:15:38.951085  203052 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:15:38.951284  203052 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:38.951296  203052 out.go:311] Setting ErrFile to fd 2...
	I0816 22:15:38.951300  203052 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:38.951424  203052 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:15:38.951635  203052 out.go:305] Setting JSON to false
	I0816 22:15:38.951661  203052 mustload.go:65] Loading cluster: pause-20210816221349-6487
	I0816 22:15:38.952166  203052 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:38.952746  203052 cli_runner.go:115] Run: docker container inspect pause-20210816221349-6487 --format={{.State.Status}}
	I0816 22:15:39.000757  203052 host.go:66] Checking if "pause-20210816221349-6487" exists ...
	I0816 22:15:39.001436  203052 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210816221349-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:15:39.003785  203052 out.go:177] * Pausing node pause-20210816221349-6487 ... 
	I0816 22:15:39.003810  203052 host.go:66] Checking if "pause-20210816221349-6487" exists ...
	I0816 22:15:39.004178  203052 ssh_runner.go:149] Run: systemctl --version
	I0816 22:15:39.004228  203052 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210816221349-6487
	I0816 22:15:39.049137  203052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32901 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816221349-6487/id_rsa Username:docker}
	I0816 22:15:39.148017  203052 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:15:39.156775  203052 pause.go:50] kubelet running: true
	I0816 22:15:39.156831  203052 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:15:39.282230  203052 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:15:39.282333  203052 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:15:39.365831  203052 cri.go:76] found id: "2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee"
	I0816 22:15:39.365864  203052 cri.go:76] found id: "a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb"
	I0816 22:15:39.365871  203052 cri.go:76] found id: "ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870"
	I0816 22:15:39.365878  203052 cri.go:76] found id: "1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04"
	I0816 22:15:39.365883  203052 cri.go:76] found id: "1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe"
	I0816 22:15:39.365890  203052 cri.go:76] found id: "8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1"
	I0816 22:15:39.365896  203052 cri.go:76] found id: "e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef"
	I0816 22:15:39.365901  203052 cri.go:76] found id: "a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7"
	I0816 22:15:39.365907  203052 cri.go:76] found id: ""
	I0816 22:15:39.365949  203052 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:15:39.404635  203052 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","pid":2096,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04/userdata","rootfs":"/var/lib/containers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","created":"2021-08-16T22:14:39.86033792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"206453cd","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"206453cd\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationM
essagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.71358247Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-njz9n\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/con
tainers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/
a67f4cc2-55b9-43ee-a73c-16467b872fa0/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/containers/kube-proxy/a9e20a41\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~projected/kube-api-access-xt74x\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.T
imeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","pid":1330,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe/userdata","rootfs":"/var/lib/containers/storage/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","created":"2021-08-16T22:14:11.184120369Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5f3481","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5f3481\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\
":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.919160303Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage
/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/conta
iners/etcd/35927707\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/config.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","pid":3662,"status":"running","bundle":"/run/containers/storage/overlay-c
ontainers/2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee/userdata","rootfs":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","created":"2021-08-16T22:15:38.128091963Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c24abe1f","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c24abe1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","io.kubernetes.cri-o.ContainerType":"cont
ainer","io.kubernetes.cri-o.Created":"2021-08-16T22:15:38.02315922Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-
system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/containers/storage-provisioner/8e800500\",\"readonly\":false},{\"container_path\":\
"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/volumes/kubernetes.io~projected/kube-api-access-p9dq6\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",
\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","pid":2035,"status":"running","bundle":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata","rootfs":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","created":"2021-08-16T22:14:39.564330015Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.139237266Z\"}","io.kubernetes.cr
i-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.465199334Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-njz9n","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.nam
e\":\"kube-proxy-njz9n\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-njz9n\",\"uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.San
dboxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/shm","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","pid":3627,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata","rootfs":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa2
74a8a/merged","created":"2021-08-16T22:15:37.972189986Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"v
olumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:15:37.567005729Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:37.882238552Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"integration-test\":\"storag
e-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274a8a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri
-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"
command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","pid":1171,"status":"running","bundle":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata","rootfs":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","created":"2021-08-16T22:14:08.696695162Z","annotations":{"component":"kube-controller-m
anager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022772584Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.544105397Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"
kube-controller-manager-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210816221349-6487\",\"uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1c
d4eeaf87e646ed_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07
.022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","pid":1328,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1/userdata","rootfs":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","created":"2021-08-16T22:14:11.184099082Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubern
etes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.90734537Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-man
ager-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.ku
bernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/containers/kube-controller-manager/d86fa3c7\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\
":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","pid":1170,"status":"running","bundle":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata","rootfs":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0
697bf78b5c340f42badf917e697c2a79be83df4/merged","created":"2021-08-16T22:14:08.70867872Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022773706Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.534025998Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c93670
8eb64dff1d98c1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210816221349-6487\",\"uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0697bf78b5c340f42badf917e697c2a79be83df4/merged","io.kubernetes.cri-o.Name":"k8s_kube-sch
eduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a334
6bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","pid":1183,"status":"running","bundle":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata","rootfs":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","created":"2021-08-16T22:14:08.712573864Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"06505179e0af316cfe7c1c0c3697c38d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022771
014Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.538885494Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"component\":\"kube-apiserver\
",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210816221349-6487\",\"uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a2
88185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","pid":2757,"status
":"running","bundle":"/run/containers/storage/overlay-containers/a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb/userdata","rootfs":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","created":"2021-08-16T22:15:29.976145781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51fdf088","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51fdf088\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"
UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.849205018Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kuberne
tes.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.SeccompProfil
ePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/containers/coredns/1be8c54d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~projected/kube-api-access-f2tgp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.
pod.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","pid":1308,"status":"running","bundle":"/run/containers/storage/overlay-containers/a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7/userdata","rootfs":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","created":"2021-08-16T22:14:11.184083062Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.
Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.895038631Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bf
f45a3346bb9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","i
o.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/containers/kube-scheduler/8abcee8c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion
":"1.0.2-dev","id":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","pid":2726,"status":"running","bundle":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata","rootfs":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","created":"2021-08-16T22:15:29.78822534Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.654892580Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth4a11b195\",\"mac\":\"ba:a0:a0:57:03:38\"},{\"name\":\"eth0\",\"mac\":\"f6:40:49:70:a6:2e\",\"sandbox\":\"/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\
"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.642943749Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.k
ubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-7wcqt\",\"uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io
.kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","pid":2120,"status":"running","bundle":"/run/containers/storage/overlay-containers/ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870/userdata","rootfs":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198b
ac52232145f6978e44ac056450178082df4/merged","created":"2021-08-16T22:14:39.964738228Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"39f7c29e","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"39f7c29e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.737372629Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io
.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d14ead-1ca3-48aa-aafd-4199981ea73a/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac52232145f6978e44ac056450178082df4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io
.kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/containers/kindnet-cni/88a22482\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kuber
netes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/volumes/kubernetes.io~projected/kube-api-access-cfvz7\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","pid":1191,"status":"running","bundle":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata","rootfs":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","created":"2021-08-16T22:14:08.737155739Z","an
notations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022751207Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.546986208Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/hostna
me","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210816221349-6487\",\"uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kuber
netes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/
config.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","pid":2028,"status":"running","bundle":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata","rootfs":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","created":"2021-08-16T22:14:39.612071417Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.135166345Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.
kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.462425985Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-gqxwk","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"k8s-app\":\"kindnet\",\"app\":\"kindnet\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4
d14ead-1ca3-48aa-aafd-4199981ea73a/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-gqxwk\",\"uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SeccompProfil
ePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/shm","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","pid":1341,"status":"running","bundle":"/run/containers/storage/overlay-containers/e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef/userdata","rootfs":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","created":"2021-08-16T22:14:11.184099004Z","annotations":{"io.container.manager":"cri-o
","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.927039032Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae
90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc
6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/containers/kube-apiserver/c36896cf\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"hos
t_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0816 22:15:39.405426  203052 cri.go:113] list returned 16 containers
	I0816 22:15:39.405443  203052 cri.go:116] container: {ID:1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 Status:running}
	I0816 22:15:39.405488  203052 cri.go:116] container: {ID:1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe Status:running}
	I0816 22:15:39.405499  203052 cri.go:116] container: {ID:2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee Status:running}
	I0816 22:15:39.405504  203052 cri.go:116] container: {ID:4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 Status:running}
	I0816 22:15:39.405513  203052 cri.go:118] skipping 4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 - not in ps
	I0816 22:15:39.405519  203052 cri.go:116] container: {ID:4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 Status:running}
	I0816 22:15:39.405524  203052 cri.go:118] skipping 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 - not in ps
	I0816 22:15:39.405530  203052 cri.go:116] container: {ID:5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df Status:running}
	I0816 22:15:39.405535  203052 cri.go:118] skipping 5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df - not in ps
	I0816 22:15:39.405542  203052 cri.go:116] container: {ID:8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1 Status:running}
	I0816 22:15:39.405551  203052 cri.go:116] container: {ID:97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 Status:running}
	I0816 22:15:39.405562  203052 cri.go:118] skipping 97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 - not in ps
	I0816 22:15:39.405567  203052 cri.go:116] container: {ID:a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 Status:running}
	I0816 22:15:39.405577  203052 cri.go:118] skipping a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 - not in ps
	I0816 22:15:39.405583  203052 cri.go:116] container: {ID:a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb Status:running}
	I0816 22:15:39.405593  203052 cri.go:116] container: {ID:a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7 Status:running}
	I0816 22:15:39.405600  203052 cri.go:116] container: {ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 Status:running}
	I0816 22:15:39.405605  203052 cri.go:118] skipping ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 - not in ps
	I0816 22:15:39.405608  203052 cri.go:116] container: {ID:ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870 Status:running}
	I0816 22:15:39.405616  203052 cri.go:116] container: {ID:c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 Status:running}
	I0816 22:15:39.405620  203052 cri.go:118] skipping c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 - not in ps
	I0816 22:15:39.405627  203052 cri.go:116] container: {ID:d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b Status:running}
	I0816 22:15:39.405631  203052 cri.go:118] skipping d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b - not in ps
	I0816 22:15:39.405637  203052 cri.go:116] container: {ID:e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef Status:running}
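Note which IDs get skipped: every "not in ps" entry above is a pod sandbox (pause:3.2/3.4.1) container. `runc list` enumerates all OCI containers, while the earlier `crictl ps -a --quiet` output contains only the application containers, so the filter is effectively a set-difference membership test. A sketch of that filtering, using hypothetical names (`keepListed`, `psIDs`) rather than the real cri.go identifiers:

package main

import "fmt"

type runcContainer struct {
	ID     string
	Status string
}

// keepListed mirrors the filtering in the log: a container from
// `runc list` is kept only if its ID also appeared in the earlier
// `crictl ps -a --quiet` output; pod sandbox (pause) containers fail
// that membership test and are skipped.
func keepListed(all []runcContainer, psIDs map[string]bool) []string {
	var keep []string
	for _, c := range all {
		if !psIDs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	// Two IDs from this log: the kube-proxy app container and its pod sandbox.
	all := []runcContainer{
		{ID: "1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04", Status: "running"},
		{ID: "4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8", Status: "running"},
	}
	psIDs := map[string]bool{all[0].ID: true} // only the app container is in `crictl ps`
	fmt.Println(keepListed(all, psIDs))
}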
	I0816 22:15:39.405679  203052 ssh_runner.go:149] Run: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04
	I0816 22:15:39.421455  203052 ssh_runner.go:149] Run: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe
	I0816 22:15:39.435085  203052 retry.go:31] will retry after 276.165072ms: runc: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:15:39Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
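The failure mode is plain from the stderr: `runc pause` accepts exactly one container ID per invocation, so batching two IDs into a single command cannot succeed, and retrying the identical batched command after 276ms would hit the same usage error. Issuing one invocation per container avoids it; the sketch below (using the two IDs from this log) is illustrative, not the actual minikube fix:

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll issues one `runc pause <container-id>` per container, since
// the command accepts exactly one ID per invocation.
func pauseAll(ids []string) error {
	for _, id := range ids {
		if out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// The two IDs the batched command above tried to pause at once.
	ids := []string{
		"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04",
		"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe",
	}
	if err := pauseAll(ids); err != nil {
		fmt.Println(err)
	}
}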
	I0816 22:15:39.711532  203052 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:15:39.721453  203052 pause.go:50] kubelet running: false
	I0816 22:15:39.721508  203052 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
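The "kubelet running: false" line is inferred purely from the exit status of the preceding command: `systemctl is-active --quiet` prints nothing and exits 0 only when the queried unit is active. A sketch of that inference, mirroring the exact command string from the log (assuming only the exit-status semantics; not minikube's pause.go source):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning runs the same check as the log; with --quiet,
// systemctl signals the result through its exit status alone
// (0 = active), so a non-nil error from Run means "not running".
func kubeletRunning() bool {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletRunning())
}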
	I0816 22:15:39.839836  203052 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:15:39.839930  203052 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:15:39.915974  203052 cri.go:76] found id: "2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee"
	I0816 22:15:39.916002  203052 cri.go:76] found id: "a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb"
	I0816 22:15:39.916007  203052 cri.go:76] found id: "ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870"
	I0816 22:15:39.916010  203052 cri.go:76] found id: "1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04"
	I0816 22:15:39.916014  203052 cri.go:76] found id: "1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe"
	I0816 22:15:39.916018  203052 cri.go:76] found id: "8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1"
	I0816 22:15:39.916022  203052 cri.go:76] found id: "e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef"
	I0816 22:15:39.916025  203052 cri.go:76] found id: "a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7"
	I0816 22:15:39.916028  203052 cri.go:76] found id: ""
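The eight IDs found above match the eight application containers kept by the earlier filter; the final empty `found id: ""` is most likely the empty trailing element left after splitting the combined output on newlines. The compound command itself is one `crictl ps` per namespace, chained with semicolons through a single shell invocation. An illustrative reconstruction of how such a command string could be assembled (hypothetical helper name `buildListCmd`):

package main

import (
	"fmt"
	"strings"
)

// buildListCmd assembles one `crictl ps` per namespace, joined with
// semicolons, so a single shell invocation returns all container IDs
// as one newline-separated stream.
func buildListCmd(namespaces []string) string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	return fmt.Sprintf("sudo -s eval %q", strings.Join(parts, "; "))
}

func main() {
	fmt.Println(buildListCmd([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}))
}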
	I0816 22:15:39.916064  203052 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:15:39.965320  203052 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","pid":2096,"status":"paused","bundle":"/run/containers/storage/overlay-containers/1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04/userdata","rootfs":"/var/lib/containers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","created":"2021-08-16T22:14:39.86033792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"206453cd","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"206453cd\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.71358247Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-njz9n\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/cont
ainers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a
67f4cc2-55b9-43ee-a73c-16467b872fa0/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/containers/kube-proxy/a9e20a41\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~projected/kube-api-access-xt74x\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.Ti
meoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","pid":1330,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe/userdata","rootfs":"/var/lib/containers/storage/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","created":"2021-08-16T22:14:11.184120369Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5f3481","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5f3481\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\"
:\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.919160303Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/
overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/contai
ners/etcd/35927707\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/config.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","pid":3662,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee/userdata","rootfs":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","created":"2021-08-16T22:15:38.128091963Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c24abe1f","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c24abe1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","io.kubernetes.cri-o.ContainerType":"conta
iner","io.kubernetes.cri-o.Created":"2021-08-16T22:15:38.02315922Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-s
ystem_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/containers/storage-provisioner/8e800500\",\"readonly\":false},{\"container_path\":\"
/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/volumes/kubernetes.io~projected/kube-api-access-p9dq6\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","pid":2035,"status":"running","bundle":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata","rootfs":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","created":"2021-08-16T22:14:39.564330015Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.139237266Z\"}","io.kubernetes.cri
-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.465199334Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-njz9n","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name
\":\"kube-proxy-njz9n\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-njz9n\",\"uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.Sand
boxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/shm","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","pid":3627,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata","rootfs":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa27
4a8a/merged","created":"2021-08-16T22:15:37.972189986Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"vo
lumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:15:37.567005729Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:37.882238552Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"integration-test\":\"storage
-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274a8a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-
o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"c
ommand\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","pid":1171,"status":"running","bundle":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata","rootfs":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","created":"2021-08-16T22:14:08.696695162Z","annotations":{"component":"kube-controller-ma
nager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022772584Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.544105397Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"k
ube-controller-manager-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210816221349-6487\",\"uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd
4eeaf87e646ed_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.
022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","pid":1328,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1/userdata","rootfs":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","created":"2021-08-16T22:14:11.184099082Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kuberne
tes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.90734537Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-mana
ger-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kub
ernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/containers/kube-controller-manager/d86fa3c7\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\"
:true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","pid":1170,"status":"running","bundle":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata","rootfs":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad06
97bf78b5c340f42badf917e697c2a79be83df4/merged","created":"2021-08-16T22:14:08.70867872Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022773706Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.534025998Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708
eb64dff1d98c1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210816221349-6487\",\"uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0697bf78b5c340f42badf917e697c2a79be83df4/merged","io.kubernetes.cri-o.Name":"k8s_kube-sche
duler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346
bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","pid":1183,"status":"running","bundle":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata","rootfs":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","created":"2021-08-16T22:14:08.712573864Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"06505179e0af316cfe7c1c0c3697c38d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.0227710
14Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.538885494Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"component\":\"kube-apiserver\"
,\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210816221349-6487\",\"uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a28
8185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","pid":2757,"status"
:"running","bundle":"/run/containers/storage/overlay-containers/a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb/userdata","rootfs":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","created":"2021-08-16T22:15:29.976145781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51fdf088","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51fdf088\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"U
DP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.849205018Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernet
es.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.SeccompProfile
Path":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/containers/coredns/1be8c54d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~projected/kube-api-access-f2tgp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.p
od.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","pid":1308,"status":"running","bundle":"/run/containers/storage/overlay-containers/a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7/userdata","rootfs":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","created":"2021-08-16T22:14:11.184083062Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.A
nnotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.895038631Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff
45a3346bb9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io
.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/containers/kube-scheduler/8abcee8c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion"
:"1.0.2-dev","id":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","pid":2726,"status":"running","bundle":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata","rootfs":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","created":"2021-08-16T22:15:29.78822534Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.654892580Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth4a11b195\",\"mac\":\"ba:a0:a0:57:03:38\"},{\"name\":\"eth0\",\"mac\":\"f6:40:49:70:a6:2e\",\"sandbox\":\"/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"
dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.642943749Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.ku
bernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-7wcqt\",\"uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.
kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","pid":2120,"status":"running","bundle":"/run/containers/storage/overlay-containers/ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870/userdata","rootfs":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198ba
c52232145f6978e44ac056450178082df4/merged","created":"2021-08-16T22:14:39.964738228Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"39f7c29e","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"39f7c29e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.737372629Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.
kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d14ead-1ca3-48aa-aafd-4199981ea73a/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac52232145f6978e44ac056450178082df4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.
kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/containers/kindnet-cni/88a22482\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubern
etes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/volumes/kubernetes.io~projected/kube-api-access-cfvz7\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","pid":1191,"status":"running","bundle":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata","rootfs":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","created":"2021-08-16T22:14:08.737155739Z","ann
otations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022751207Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.546986208Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/hostnam
e","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210816221349-6487\",\"uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubern
etes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/c
onfig.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","pid":2028,"status":"running","bundle":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata","rootfs":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","created":"2021-08-16T22:14:39.612071417Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.135166345Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.k
ubernetes.cri-o.ContainerName":"k8s_POD_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.462425985Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-gqxwk","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"k8s-app\":\"kindnet\",\"app\":\"kindnet\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d
14ead-1ca3-48aa-aafd-4199981ea73a/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-gqxwk\",\"uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SeccompProfile
Path":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/shm","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","pid":1341,"status":"running","bundle":"/run/containers/storage/overlay-containers/e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef/userdata","rootfs":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","created":"2021-08-16T22:14:11.184099004Z","annotations":{"io.container.manager":"cri-o"
,"io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.927039032Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae9
0e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc6
681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/containers/kube-apiserver/c36896cf\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host
_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0816 22:15:39.966233  203052 cri.go:113] list returned 16 containers
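The JSON dump above is the raw `sudo runc list -f json` payload that cri.go parses into the 16 entries counted here. A minimal sketch of decoding it in Go, using only the field names visible in the payload (the helper is illustrative, not minikube's actual implementation):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// runcContainer mirrors the fields visible in the `runc list -f json`
	// payload logged above; any extra fields in the JSON are simply ignored.
	type runcContainer struct {
		OCIVersion  string            `json:"ociVersion"`
		ID          string            `json:"id"`
		PID         int               `json:"pid"`
		Status      string            `json:"status"` // e.g. "running" or "paused"
		Bundle      string            `json:"bundle"`
		Rootfs      string            `json:"rootfs"`
		Created     string            `json:"created"`
		Annotations map[string]string `json:"annotations"`
		Owner       string            `json:"owner"`
	}

	func parseRuncList(raw []byte) ([]runcContainer, error) {
		var cs []runcContainer
		if err := json.Unmarshal(raw, &cs); err != nil {
			return nil, fmt.Errorf("decoding runc list output: %w", err)
		}
		return cs, nil
	}

	func main() {
		// Tiny stand-in for the real payload above.
		sample := []byte(`[{"id":"1b3d3880e345","status":"paused"},{"id":"e812d329ba69","status":"running"}]`)
		cs, err := parseRuncList(sample)
		if err != nil {
			panic(err)
		}
		for _, c := range cs {
			fmt.Printf("%s -> %s\n", c.ID, c.Status)
		}
	}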
	I0816 22:15:39.966252  203052 cri.go:116] container: {ID:1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 Status:paused}
	I0816 22:15:39.966266  203052 cri.go:122] skipping {1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 paused}: state = "paused", want "running"
	I0816 22:15:39.966282  203052 cri.go:116] container: {ID:1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe Status:running}
	I0816 22:15:39.966288  203052 cri.go:116] container: {ID:2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee Status:running}
	I0816 22:15:39.966296  203052 cri.go:116] container: {ID:4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 Status:running}
	I0816 22:15:39.966301  203052 cri.go:118] skipping 4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 - not in ps
	I0816 22:15:39.966305  203052 cri.go:116] container: {ID:4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 Status:running}
	I0816 22:15:39.966310  203052 cri.go:118] skipping 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 - not in ps
	I0816 22:15:39.966313  203052 cri.go:116] container: {ID:5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df Status:running}
	I0816 22:15:39.966317  203052 cri.go:118] skipping 5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df - not in ps
	I0816 22:15:39.966321  203052 cri.go:116] container: {ID:8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1 Status:running}
	I0816 22:15:39.966325  203052 cri.go:116] container: {ID:97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 Status:running}
	I0816 22:15:39.966329  203052 cri.go:118] skipping 97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 - not in ps
	I0816 22:15:39.966333  203052 cri.go:116] container: {ID:a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 Status:running}
	I0816 22:15:39.966337  203052 cri.go:118] skipping a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 - not in ps
	I0816 22:15:39.966341  203052 cri.go:116] container: {ID:a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb Status:running}
	I0816 22:15:39.966345  203052 cri.go:116] container: {ID:a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7 Status:running}
	I0816 22:15:39.966349  203052 cri.go:116] container: {ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 Status:running}
	I0816 22:15:39.966354  203052 cri.go:118] skipping ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 - not in ps
	I0816 22:15:39.966360  203052 cri.go:116] container: {ID:ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870 Status:running}
	I0816 22:15:39.966364  203052 cri.go:116] container: {ID:c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 Status:running}
	I0816 22:15:39.966368  203052 cri.go:118] skipping c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 - not in ps
	I0816 22:15:39.966371  203052 cri.go:116] container: {ID:d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b Status:running}
	I0816 22:15:39.966375  203052 cri.go:118] skipping d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b - not in ps
	I0816 22:15:39.966379  203052 cri.go:116] container: {ID:e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef Status:running}
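The entries above trace the filter pass in cri.go: any container whose state is not "running" is skipped outright (cri.go:122), and any ID missing from the earlier crictl listing is skipped as "not in ps" (cri.go:118). A sketch of that filter, assuming the crictl IDs were collected into a set (names here are illustrative, and the loop shape follows the log messages rather than minikube's exact code):

	package main

	import "fmt"

	// container is a pared-down view of a runc list entry; only the two
	// fields the filter needs are kept here.
	type container struct {
		ID     string
		Status string
	}

	// filterRunning drops anything not in the "running" state and anything
	// absent from inPs, the (hypothetical) set of IDs from `crictl ps`.
	func filterRunning(all []container, inPs map[string]bool) []string {
		var keep []string
		for _, c := range all {
			if c.Status != "running" {
				continue // skipping {<id> paused}: state = "paused", want "running"
			}
			if !inPs[c.ID] {
				continue // skipping <id> - not in ps
			}
			keep = append(keep, c.ID)
		}
		return keep
	}

	func main() {
		all := []container{
			{ID: "1b3d3880e345", Status: "paused"},
			{ID: "1b4dd675dc4b", Status: "running"},
			{ID: "4a42049e9534", Status: "running"}, // sandbox, not in ps
		}
		inPs := map[string]bool{"1b4dd675dc4b": true}
		fmt.Println(filterRunning(all, inPs)) // [1b4dd675dc4b]
	}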
	I0816 22:15:39.966411  203052 ssh_runner.go:149] Run: sudo runc pause 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe
	I0816 22:15:39.982974  203052 ssh_runner.go:149] Run: sudo runc pause 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee
	I0816 22:15:39.997132  203052 retry.go:31] will retry after 540.190908ms: runc: sudo runc pause 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:15:39Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:15:40.537472  203052 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:15:40.547556  203052 pause.go:50] kubelet running: false
	I0816 22:15:40.547609  203052 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:15:40.656306  203052 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:15:40.656375  203052 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:15:40.729713  203052 cri.go:76] found id: "2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee"
	I0816 22:15:40.729739  203052 cri.go:76] found id: "a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb"
	I0816 22:15:40.729744  203052 cri.go:76] found id: "ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870"
	I0816 22:15:40.729749  203052 cri.go:76] found id: "1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04"
	I0816 22:15:40.729752  203052 cri.go:76] found id: "1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe"
	I0816 22:15:40.729757  203052 cri.go:76] found id: "8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1"
	I0816 22:15:40.729760  203052 cri.go:76] found id: "e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef"
	I0816 22:15:40.729763  203052 cri.go:76] found id: "a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7"
	I0816 22:15:40.729767  203052 cri.go:76] found id: ""
	I0816 22:15:40.729806  203052 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:15:40.767478  203052 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","pid":2096,"status":"paused","bundle":"/run/containers/storage/overlay-containers/1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04/userdata","rootfs":"/var/lib/containers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","created":"2021-08-16T22:14:39.86033792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"206453cd","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"206453cd\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.71358247Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-njz9n\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/cont
ainers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a
67f4cc2-55b9-43ee-a73c-16467b872fa0/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/containers/kube-proxy/a9e20a41\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~projected/kube-api-access-xt74x\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.Ti
meoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","pid":1330,"status":"paused","bundle":"/run/containers/storage/overlay-containers/1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe/userdata","rootfs":"/var/lib/containers/storage/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","created":"2021-08-16T22:14:11.184120369Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5f3481","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5f3481\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":
\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.919160303Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/o
verlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/contain
ers/etcd/35927707\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/config.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","pid":3662,"status":"running","bundle":"/run/containers/storage/overlay-con
tainers/2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee/userdata","rootfs":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","created":"2021-08-16T22:15:38.128091963Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c24abe1f","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c24abe1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","io.kubernetes.cri-o.ContainerType":"contai
ner","io.kubernetes.cri-o.Created":"2021-08-16T22:15:38.02315922Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-sy
stem_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/containers/storage-provisioner/8e800500\",\"readonly\":false},{\"container_path\":\"/
var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/volumes/kubernetes.io~projected/kube-api-access-p9dq6\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"
volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","pid":2035,"status":"running","bundle":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata","rootfs":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","created":"2021-08-16T22:14:39.564330015Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.139237266Z\"}","io.kubernetes.cri-
o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.465199334Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-njz9n","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\
":\"kube-proxy-njz9n\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-njz9n\",\"uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.Sandb
oxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/shm","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","pid":3627,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata","rootfs":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274
a8a/merged","created":"2021-08-16T22:15:37.972189986Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"vol
umes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:15:37.567005729Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:37.882238552Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"integration-test\":\"storage-
provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274a8a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o
.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"co
mmand\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","pid":1171,"status":"running","bundle":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata","rootfs":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","created":"2021-08-16T22:14:08.696695162Z","annotations":{"component":"kube-controller-man
ager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022772584Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.544105397Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"ku
be-controller-manager-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210816221349-6487\",\"uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4
eeaf87e646ed_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.0
22772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","pid":1328,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1/userdata","rootfs":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","created":"2021-08-16T22:14:11.184099082Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernet
es.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.90734537Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manag
er-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kube
rnetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/containers/kube-controller-manager/d86fa3c7\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":
true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","pid":1170,"status":"running","bundle":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata","rootfs":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad069
7bf78b5c340f42badf917e697c2a79be83df4/merged","created":"2021-08-16T22:14:08.70867872Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022773706Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.534025998Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708e
b64dff1d98c1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210816221349-6487\",\"uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0697bf78b5c340f42badf917e697c2a79be83df4/merged","io.kubernetes.cri-o.Name":"k8s_kube-sched
uler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346b
b9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","pid":1183,"status":"running","bundle":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata","rootfs":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","created":"2021-08-16T22:14:08.712573864Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"06505179e0af316cfe7c1c0c3697c38d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.02277101
4Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.538885494Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"component\":\"kube-apiserver\",
\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210816221349-6487\",\"uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288
185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","pid":2757,"status":
"running","bundle":"/run/containers/storage/overlay-containers/a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb/userdata","rootfs":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","created":"2021-08-16T22:15:29.976145781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51fdf088","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51fdf088\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UD
P\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.849205018Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernete
s.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.SeccompProfileP
ath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/containers/coredns/1be8c54d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~projected/kube-api-access-f2tgp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.po
d.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","pid":1308,"status":"running","bundle":"/run/containers/storage/overlay-containers/a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7/userdata","rootfs":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","created":"2021-08-16T22:14:11.184083062Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.An
notations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.895038631Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff4
5a3346bb9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.
kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/containers/kube-scheduler/8abcee8c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","pid":2726,"status":"running","bundle":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata","rootfs":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","created":"2021-08-16T22:15:29.78822534Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.654892580Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth4a11b195\",\"mac\":\"ba:a0:a0:57:03:38\"},{\"name\":\"eth0\",\"mac\":\"f6:40:49:70:a6:2e\",\"sandbox\":\"/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"d
ns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.642943749Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kub
ernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-7wcqt\",\"uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.k
ubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","pid":2120,"status":"running","bundle":"/run/containers/storage/overlay-containers/ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870/userdata","rootfs":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac
52232145f6978e44ac056450178082df4/merged","created":"2021-08-16T22:14:39.964738228Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"39f7c29e","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"39f7c29e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.737372629Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.k
ubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d14ead-1ca3-48aa-aafd-4199981ea73a/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac52232145f6978e44ac056450178082df4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.k
ubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/containers/kindnet-cni/88a22482\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kuberne
tes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/volumes/kubernetes.io~projected/kube-api-access-cfvz7\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","pid":1191,"status":"running","bundle":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata","rootfs":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","created":"2021-08-16T22:14:08.737155739Z","anno
tations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022751207Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.546986208Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/hostname
","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210816221349-6487\",\"uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kuberne
tes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/co
nfig.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","pid":2028,"status":"running","bundle":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata","rootfs":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","created":"2021-08-16T22:14:39.612071417Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.135166345Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.ku
bernetes.cri-o.ContainerName":"k8s_POD_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.462425985Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-gqxwk","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"k8s-app\":\"kindnet\",\"app\":\"kindnet\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d1
4ead-1ca3-48aa-aafd-4199981ea73a/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-gqxwk\",\"uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SeccompProfileP
ath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/shm","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","pid":1341,"status":"running","bundle":"/run/containers/storage/overlay-containers/e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef/userdata","rootfs":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","created":"2021-08-16T22:14:11.184099004Z","annotations":{"io.container.manager":"cri-o",
"io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.927039032Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90
e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc66
81a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/containers/kube-apiserver/c36896cf\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_
path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0816 22:15:40.768287  203052 cri.go:113] list returned 16 containers
	I0816 22:15:40.768305  203052 cri.go:116] container: {ID:1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 Status:paused}
	I0816 22:15:40.768315  203052 cri.go:122] skipping {1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 paused}: state = "paused", want "running"
	I0816 22:15:40.768324  203052 cri.go:116] container: {ID:1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe Status:paused}
	I0816 22:15:40.768329  203052 cri.go:122] skipping {1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe paused}: state = "paused", want "running"
	I0816 22:15:40.768333  203052 cri.go:116] container: {ID:2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee Status:running}
	I0816 22:15:40.768338  203052 cri.go:116] container: {ID:4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 Status:running}
	I0816 22:15:40.768346  203052 cri.go:118] skipping 4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 - not in ps
	I0816 22:15:40.768360  203052 cri.go:116] container: {ID:4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 Status:running}
	I0816 22:15:40.768365  203052 cri.go:118] skipping 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 - not in ps
	I0816 22:15:40.768368  203052 cri.go:116] container: {ID:5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df Status:running}
	I0816 22:15:40.768373  203052 cri.go:118] skipping 5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df - not in ps
	I0816 22:15:40.768376  203052 cri.go:116] container: {ID:8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1 Status:running}
	I0816 22:15:40.768380  203052 cri.go:116] container: {ID:97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 Status:running}
	I0816 22:15:40.768385  203052 cri.go:118] skipping 97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 - not in ps
	I0816 22:15:40.768388  203052 cri.go:116] container: {ID:a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 Status:running}
	I0816 22:15:40.768392  203052 cri.go:118] skipping a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 - not in ps
	I0816 22:15:40.768396  203052 cri.go:116] container: {ID:a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb Status:running}
	I0816 22:15:40.768400  203052 cri.go:116] container: {ID:a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7 Status:running}
	I0816 22:15:40.768404  203052 cri.go:116] container: {ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 Status:running}
	I0816 22:15:40.768408  203052 cri.go:118] skipping ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 - not in ps
	I0816 22:15:40.768411  203052 cri.go:116] container: {ID:ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870 Status:running}
	I0816 22:15:40.768415  203052 cri.go:116] container: {ID:c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 Status:running}
	I0816 22:15:40.768420  203052 cri.go:118] skipping c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 - not in ps
	I0816 22:15:40.768423  203052 cri.go:116] container: {ID:d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b Status:running}
	I0816 22:15:40.768427  203052 cri.go:118] skipping d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b - not in ps
	I0816 22:15:40.768430  203052 cri.go:116] container: {ID:e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef Status:running}
	I0816 22:15:40.768463  203052 ssh_runner.go:149] Run: sudo runc pause 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee
	I0816 22:15:40.783267  203052 ssh_runner.go:149] Run: sudo runc pause 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee 8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1
	I0816 22:15:40.799640  203052 out.go:177] 
	W0816 22:15:40.799807  203052 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee 8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:15:40Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:15:40.799825  203052 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:15:40.802360  203052 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:15:40.803825  203052 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5" : exit status 80
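Editor's note: the failing command is visible in the trace above. minikube first ran `sudo runc pause` with a single container ID (22:15:40.768), then re-ran it with two IDs in one invocation (22:15:40.783). As runc's own usage text states, `runc pause` accepts exactly one container ID per call, so the batched invocation exits with status 1 and minikube surfaces it as GUEST_PAUSE. Below is a minimal Go sketch of the working pattern, pausing one container per invocation; the pauseAll helper is hypothetical and the hard-coded IDs are copied from the log above, so this illustrates the fix rather than minikube's actual implementation.

// Sketch only: pause CRI-O containers one at a time, because
// `runc pause` takes exactly one container ID per invocation.
package main

import (
	"fmt"
	"os/exec"
)

// pauseAll is a hypothetical helper, not minikube's real code.
func pauseAll(ids []string) error {
	for _, id := range ids {
		// One `sudo runc pause <id>` per container; batching IDs into a
		// single call fails with "requires exactly 1 argument(s)".
		out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Container IDs taken verbatim from the failing invocation above.
	ids := []string{
		"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee",
		"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1",
	}
	if err := pauseAll(ids); err != nil {
		fmt.Println(err)
	}
}
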
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210816221349-6487
helpers_test.go:236: (dbg) docker inspect pause-20210816221349-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d",
	        "Created": "2021-08-16T22:13:50.947309762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:51.51454931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hosts",
	        "LogPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d-json.log",
	        "Name": "/pause-20210816221349-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210816221349-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210816221349-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210816221349-6487",
	                "Source": "/var/lib/docker/volumes/pause-20210816221349-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210816221349-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "name.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0570bf3c5e1623f8d98964c6c2afad0bc376f97b81690d2719c8fc8bafd98f8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0570bf3c5e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210816221349-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d383b66a4"
	                    ],
	                    "NetworkID": "394b0b68014ce308c4cac60aecb16a91b93630211f90dc3e79f9040bcf6f53a0",
	                    "EndpointID": "66674d2a7391164faa47236ee3755487b5135a367100c27f1e2bc07dde97d027",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (17.340721957s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:15:58.201527  203515 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
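Editor's note: the status probe shows the cluster left half-paused: the apiserver still answers, but /healthz returns 500 with "[-]etcd failed", consistent with the earlier single-container `runc pause` having succeeded before the batched call aborted. The sketch below reproduces this probe in Go; it assumes the healthz endpoint is reachable anonymously and skips certificate verification (minikube's apiserver cert is self-signed), with the address taken from the log.

// Diagnostic sketch only, not part of the test suite.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip TLS verification: the apiserver certificate is self-signed.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	// ?verbose asks the apiserver to list each individual health check,
	// matching the "[+]ping ok / [-]etcd failed" output captured above.
	resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
}

On a healthy control plane this returns HTTP 200 with every check marked [+]; here it would return the 500 body captured above.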
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25: exit status 110 (20.877448449s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:12 UTC | Mon, 16 Aug 2021 22:10:44 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:44 UTC | Mon, 16 Aug 2021 22:10:44 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:57 UTC | Mon, 16 Aug 2021 22:11:22 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:24 UTC | Mon, 16 Aug 2021 22:11:29 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210816221129-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:36 UTC | Mon, 16 Aug 2021 22:11:42 UTC |
	|         | insufficient-storage-20210816221129-6487 |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210816221142-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:42 UTC | Mon, 16 Aug 2021 22:12:18 UTC |
	|         | force-systemd-flag-20210816221142-6487   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210816221142-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:18 UTC | Mon, 16 Aug 2021 22:12:21 UTC |
	|         | force-systemd-flag-20210816221142-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:44 UTC | Mon, 16 Aug 2021 22:12:34 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:34 UTC | Mon, 16 Aug 2021 22:12:36 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-crio-20210816221142-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:42 UTC | Mon, 16 Aug 2021 22:13:23 UTC |
	|         | offline-crio-20210816221142-6487         |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1                   |                                          |         |         |                               |                               |
	|         | --memory=2048 --wait=true                |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:36 UTC | Mon, 16 Aug 2021 22:13:24 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-crio-20210816221142-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:23 UTC | Mon, 16 Aug 2021 22:13:26 UTC |
	|         | offline-crio-20210816221142-6487         |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:24 UTC | Mon, 16 Aug 2021 22:13:46 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:46 UTC | Mon, 16 Aug 2021 22:13:49 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210816221142-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:44 UTC | Mon, 16 Aug 2021 22:14:50 UTC |
	|         | missing-upgrade-20210816221142-6487      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210816221142-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:50 UTC | Mon, 16 Aug 2021 22:14:53 UTC |
	|         | missing-upgrade-20210816221142-6487      |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210816221453-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:53 UTC | Mon, 16 Aug 2021 22:15:24 UTC |
	|         | force-systemd-env-20210816221453-6487    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210816221221-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:22 UTC | Mon, 16 Aug 2021 22:15:25 UTC |
	|         | stopped-upgrade-20210816221221-6487      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210816221453-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:24 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	|         | force-systemd-env-20210816221453-6487    |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210816221527-6487           | kubenet-20210816221527-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	| delete  | -p flannel-20210816221527-6487           | flannel-20210816221527-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| delete  | -p false-20210816221528-6487             | false-20210816221528-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| start   | -p pause-20210816221349-6487             | pause-20210816221349-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:49 UTC | Mon, 16 Aug 2021 22:15:32 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| start   | -p pause-20210816221349-6487             | pause-20210816221349-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:32 UTC | Mon, 16 Aug 2021 22:15:38 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20210816221326-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:52 UTC | Mon, 16 Aug 2021 22:15:55 UTC |
	|         | running-upgrade-20210816221326-6487      |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:15:55
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:15:55.711235  205051 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:15:55.711320  205051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:55.711324  205051 out.go:311] Setting ErrFile to fd 2...
	I0816 22:15:55.711328  205051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:55.711455  205051 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:15:55.711732  205051 out.go:305] Setting JSON to false
	I0816 22:15:55.747539  205051 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3323,"bootTime":1629148833,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:15:55.747632  205051 start.go:121] virtualization: kvm guest
	I0816 22:15:55.750660  205051 out.go:177] * [no-preload-20210816221555-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:15:55.752204  205051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:15:55.750822  205051 notify.go:169] Checking for updates...
	I0816 22:15:55.753731  205051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:15:55.755094  205051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:15:55.756529  205051 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:15:55.757027  205051 config.go:177] Loaded profile config "cert-options-20210816221525-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:55.757117  205051 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:15:55.757188  205051 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:55.757228  205051 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:15:55.804394  205051 docker.go:132] docker version: linux-19.03.15
	I0816 22:15:55.804473  205051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:15:55.885289  205051 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 22:15:55.840601588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
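
Note: the `docker system info --format "{{json .}}"` probe above is how the start flow inspects the host before picking a driver. A minimal sketch of the same idea in Go (the field selection here is illustrative; minikube's real info.go decodes many more fields):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo holds only the fields this sketch cares about; the real
// `docker system info --format "{{json .}}"` payload is much larger.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
	MemoryLimit  bool   `json:"MemoryLimit"`
	SwapLimit    bool   `json:"SwapLimit"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	// SwapLimit:false is what produces the "No swap limit support" warning above.
	fmt.Printf("cpus=%d mem=%d cgroupDriver=%s swapLimit=%v\n",
		info.NCPU, info.MemTotal, info.CgroupDriver, info.SwapLimit)
}
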
	I0816 22:15:55.885375  205051 docker.go:244] overlay module found
	I0816 22:15:55.887205  205051 out.go:177] * Using the docker driver based on user configuration
	I0816 22:15:55.887229  205051 start.go:278] selected driver: docker
	I0816 22:15:55.887235  205051 start.go:751] validating driver "docker" against <nil>
	I0816 22:15:55.887258  205051 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:15:55.887319  205051 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:15:55.887346  205051 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:15:55.888686  205051 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:15:55.889528  205051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:15:55.972543  205051 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 22:15:55.92568751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:15:55.972657  205051 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 22:15:55.972792  205051 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:15:55.972811  205051 cni.go:93] Creating CNI manager for ""
	I0816 22:15:55.972821  205051 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:15:55.972829  205051 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 22:15:55.972837  205051 start_flags.go:277] config:
	{Name:no-preload-20210816221555-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210816221555-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:15:55.974973  205051 out.go:177] * Starting control plane node no-preload-20210816221555-6487 in cluster no-preload-20210816221555-6487
	I0816 22:15:55.975019  205051 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:15:55.976621  205051 out.go:177] * Pulling base image ...
	I0816 22:15:55.976653  205051 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:15:55.976713  205051 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:15:55.976790  205051 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/config.json ...
	I0816 22:15:55.976817  205051 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/config.json: {Name:mk2fbefce27097f59eef7adfa95ec2752f454ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:15:55.976932  205051 cache.go:108] acquiring lock: {Name:mke3d64dcf3270420cc281e6a6befd30594c50fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.976970  205051 cache.go:108] acquiring lock: {Name:mk5e57436a8282dfcfae97822cf38d63e761cfc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977038  205051 cache.go:108] acquiring lock: {Name:mk8b5cd7b473c3e52a6050458b483dac5b759db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977043  205051 cache.go:108] acquiring lock: {Name:mk05cad4650711c0dc8b82611084ba0487915028 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977082  205051 cache.go:108] acquiring lock: {Name:mk0b84fbea34d74cc2da16fdbda169da7718e6bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977088  205051 cache.go:108] acquiring lock: {Name:mkaf6b977f4d8726e9f72af6acb90bd88d874f24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977122  205051 cache.go:108] acquiring lock: {Name:mkcdf8b8ecbacd813c782d54e6c7afb45d0c081f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977169  205051 cache.go:108] acquiring lock: {Name:mkceeaa65d9a0c7ffeb3f51de4a55a7fa06d6162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977180  205051 cache.go:108] acquiring lock: {Name:mkd757956ba096c9c6c2faef405bc87f0df51e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977244  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0816 22:15:55.977266  205051 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 145.355µs
	I0816 22:15:55.977281  205051 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0816 22:15:55.977281  205051 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:15:55.977297  205051 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:15:55.977313  205051 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:15:55.977339  205051 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:15:55.977345  205051 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0816 22:15:55.977359  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:15:55.977388  205051 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 272.389µs
	I0816 22:15:55.977405  205051 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:15:55.977232  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:15:55.977428  205051 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 511.551µs
	I0816 22:15:55.977458  205051 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:15:55.977155  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:15:55.977483  205051 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 403.554µs
	I0816 22:15:55.977494  205051 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:15:55.976929  205051 cache.go:108] acquiring lock: {Name:mkfe80324338b4a8a4207174129f1dc96c573f20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977588  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0816 22:15:55.977606  205051 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 688.053µs
	I0816 22:15:55.977628  205051 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
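
Note: each cache.go:108 line above acquires a named lock with Delay:500ms Timeout:10m0s before touching a cached image tarball. A minimal sketch of that retry-until-timeout pattern, using a plain O_EXCL lockfile for illustration rather than minikube's actual lock package (which this report does not show):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lockfile every delay until timeout,
// mirroring the Delay:500ms Timeout:10m0s specs in the log above.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-cache.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... write the image tarball while holding the lock ...
	fmt.Println("lock held")
}
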
	I0816 22:15:55.978238  205051 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0816 22:15:56.079552  205051 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:15:56.079588  205051 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:15:56.079606  205051 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:15:56.079635  205051 start.go:313] acquiring machines lock for no-preload-20210816221555-6487: {Name:mkf7c0efbb44aa0951afe0dbe82e022fcb7e6d84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:56.079764  205051 start.go:317] acquired machines lock for "no-preload-20210816221555-6487" in 106.012µs
	I0816 22:15:56.079797  205051 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210816221555-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210816221555-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:15:56.081454  205051 start.go:126] createHost starting for "" (driver="docker")
	I0816 22:15:54.062785  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:54.562791  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:55.062641  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:55.562864  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:56.063221  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:56.562619  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:57.063215  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:57.563543  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:58.063222  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
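
Note: process 199410 above is retrying `kubectl get sa default` on a fixed ~500ms interval until the default service account exists. A minimal sketch of that poll loop (shelling out to kubectl directly here; the real ssh_runner executes the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries the same probe the log shows until it
// succeeds or the deadline passes.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account ready")
}
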
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:15:59 UTC. --
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.997181209Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.998911132Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001615431Z" level=info msg="Conmon does support the --sync option"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001679089Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001686289Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.006618470Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.009192800Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.011666290Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023071034Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023093501Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.335777529Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-7wcqt Namespace:kube-system ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 NetNS:/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.336029066Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 22:15:34 pause-20210816221349-6487 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 16 22:15:37 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:37.869390202Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.008166152Z" level=info msg="Ran pod sandbox 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 with infra container: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.009539171Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010209695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010864154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.011418773Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.012207175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023341263Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/passwd: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023470306Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/group: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144183330Z" level=info msg="Created container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144745336Z" level=info msg="Starting container: 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.155275298Z" level=info msg="Started container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
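
Note: the CNI discovery lines above show kindnet, crio, loopback, and podman configs all present under /etc/cni/net.d, a common source of conflicts on this job. A minimal sketch of enumerating that directory the same way and reporting each network's name and type, assuming standard CNI config JSON:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// cniConf covers the fields shared by .conf and .conflist files.
type cniConf struct {
	Name    string `json:"name"`
	Type    string `json:"type"` // set in .conf files
	Plugins []struct {
		Type string `json:"type"`
	} `json:"plugins"` // set in .conflist files
}

func main() {
	paths, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, path := range paths {
		data, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		var c cniConf
		if json.Unmarshal(data, &c) != nil || c.Name == "" {
			continue
		}
		typ := c.Type
		if typ == "" && len(c.Plugins) > 0 {
			typ = c.Plugins[0].Type
		}
		// Runtimes typically treat the lexically first config as the
		// default network, hence "Default CNI network name kindnet" above.
		fmt.Printf("%s: network %q (type=%s)\n", filepath.Base(path), c.Name, typ)
	}
}
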
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	2bd1364ac865c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago       Running             storage-provisioner       0                   4e41a3650a65f
	a3847cf5a7a0a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   30 seconds ago       Running             coredns                   0                   ace8d49de7551
	ba2e9dd72df01       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   d5d9684c84cea
	1b3d3880e345b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   4a42049e95348
	1b4dd675dc4bc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   c159e3fd639d7
	8a5626e3acb8d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   5f516d619d78c
	e812d329ba697       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   a055e0d3dc6de
	a65e43c156f4f       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   97b975cd86e3b
	
	* 
	* ==> coredns [a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [ +13.078852] cgroup: cgroup2: unknown option "nsdelegate"
	[ +21.279883] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:14] cgroup: cgroup2: unknown option "nsdelegate"
	[ +19.700170] cgroup: cgroup2: unknown option "nsdelegate"
	[ +26.593185] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 42 b6 81 13 ff 08 06        .......B......
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 86 42 b6 81 13 ff 08 06        .......B......
	[  +5.056325] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:15] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ae c5 9a 81 f2 bb 08 06        ..............
	[  +4.586677] IPv4: martian source 10.88.0.2 from 10.88.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 44 72 c2 9e eb 08 06        .......Dr.....
	[  +0.000006] IPv4: martian source 10.88.0.2 from 10.88.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 96 44 72 c2 9e eb 08 06        .......Dr.....
	[  +0.324396] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 e9 d5 f7 bd 83 08 06        ..............
	[ +10.087279] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth4a11b195
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 40 49 70 a6 2e 08 06        .......@Ip....
	[  +1.336173] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.800279] IPv4: martian source 10.88.0.4 from 10.88.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce c3 8d 30 44 be 08 06        .........0D...
	[  +0.916901] IPv4: martian source 10.88.0.5 from 10.88.0.5, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 20 c4 62 e7 25 08 06        ....... .b.%..
	[ +16.836824] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe] <==
	* 2021-08-16 22:14:21.479302 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" " with result "range_response_count:0 size:4" took too long (4.338058844s) to execute
	2021-08-16 22:14:21.479372 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (5.261433612s) to execute
	2021-08-16 22:14:21.479441 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (5.23712455s) to execute
	2021-08-16 22:14:21.479474 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (5.175788463s) to execute
	2021-08-16 22:14:21.479613 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210816221349-6487\" " with result "range_response_count:1 size:4444" took too long (5.261587425s) to execute
	2021-08-16 22:14:21.479683 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/system-node-critical\" " with result "range_response_count:0 size:4" took too long (4.338074095s) to execute
	2021-08-16 22:14:21.479744 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (5.263168271s) to execute
	2021-08-16 22:14:23.363332 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000022778s) to execute
	2021-08-16 22:14:23.693334 W | wal: sync duration of 2.214752234s, expected less than 1s
	2021-08-16 22:14:23.700535 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (2.770548489s) to execute
	2021-08-16 22:14:23.700621 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/csr-76sbr\" " with result "range_response_count:1 size:916" took too long (4.511679407s) to execute
	2021-08-16 22:14:23.707488 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:0 size:4" took too long (2.220681713s) to execute
	2021-08-16 22:14:23.707931 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (2.217648018s) to execute
	2021-08-16 22:14:23.708160 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (2.166841468s) to execute
	2021-08-16 22:14:23.708359 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:0 size:4" took too long (2.218216556s) to execute
	2021-08-16 22:14:23.708455 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:4" took too long (2.217851178s) to execute
	2021-08-16 22:14:24.811296 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:5" took too long (246.052034ms) to execute
	2021-08-16 22:14:24.811319 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (374.900233ms) to execute
	2021-08-16 22:14:24.811421 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (278.56366ms) to execute
	2021-08-16 22:14:42.629727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:14:52.246014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:02.245960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:12.246500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:22.245983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:32.246694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
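
Note: the etcd section shows multi-second fsync and range latencies, yet the periodic /health probes still come back 200. The probe itself is an HTTP GET against the client port; a minimal sketch (assuming plain HTTP on 127.0.0.1:2379 for illustration; this cluster's client port is TLS-protected, so real use would need the client certs):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// etcd's /health handler returns {"health":"true"} when the member
	// can serve a quorum read; the log above shows it responding 200.
	client := &http.Client{Timeout: 2 * time.Second}
	start := time.Now()
	resp, err := client.Get("http://127.0.0.1:2379/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	var h struct {
		Health string `json:"health"`
	}
	_ = json.Unmarshal(body, &h)
	// Mirror etcd's own "took too long" warning threshold of ~1s.
	if d := time.Since(start); d > time.Second {
		fmt.Printf("warning: health check took too long (%s)\n", d)
	}
	fmt.Printf("status=%d health=%s\n", resp.StatusCode, h.Health)
}
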
	
	* 
	* ==> kernel <==
	*  22:16:18 up 55 min,  0 users,  load average: 5.35, 4.20, 2.34
	Linux pause-20210816221349-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef] <==
	* I0816 22:16:17.544030       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	E0816 22:16:17.552789       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:16:17.553174       1 trace.go:205] Trace[9719085]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:15:44.839) (total time: 32713ms):
	Trace[9719085]: [32.713817449s] [32.713817449s] END
	W0816 22:16:17.592589       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0816 22:16:17.629417       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0816 22:16:17.746876       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	E0816 22:16:18.185811       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:16:18.186200       1 trace.go:205] Trace[2049541572]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:15:40.191) (total time: 37994ms):
	Trace[2049541572]: [37.994208217s] [37.994208217s] END
	I0816 22:16:18.830827       1 trace.go:205] Trace[1538137161]: "GuaranteedUpdate etcd3" type:*core.Node (16-Aug-2021 22:16:14.070) (total time: 4760ms):
	Trace[1538137161]: [4.760614839s] [4.760614839s] END
	I0816 22:16:18.830858       1 trace.go:205] Trace[1981517256]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:16:00.171) (total time: 18659ms):
	Trace[1981517256]: [18.659160576s] [18.659160576s] END
	E0816 22:16:18.830891       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	E0816 22:16:18.830891       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:16:18.830910       1 trace.go:205] Trace[1324720898]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:15:40.414) (total time: 38416ms):
	Trace[1324720898]: [38.416733645s] [38.416733645s] END
	E0816 22:16:18.830945       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:16:18.831165       1 trace.go:205] Trace[449371622]: "Update" url:/api/v1/nodes/pause-20210816221349-6487/status,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:16:14.069) (total time: 4761ms):
	Trace[449371622]: [4.7612655s] [4.7612655s] END
	I0816 22:16:18.832260       1 trace.go:205] Trace[1827903438]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:16:00.171) (total time: 18660ms):
	Trace[1827903438]: [18.660579648s] [18.660579648s] END
	I0816 22:16:18.833342       1 trace.go:205] Trace[1865011755]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:15:40.414) (total time: 38419ms):
	Trace[1865011755]: [38.419195292s] [38.419195292s] END
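
Note: each Trace[...] entry above is the apiserver's latency tracing: it records a start time per request and dumps the span only when total latency crosses a threshold, which is why only the multi-second requests appear. A minimal sketch of that log-if-slow pattern (a hypothetical helper, not the k8s.io/utils/trace implementation itself):

package main

import (
	"fmt"
	"time"
)

// trace times an operation and logs it only when it exceeds threshold.
func trace(name string, threshold time.Duration, op func()) {
	start := time.Now()
	op()
	if d := time.Since(start); d > threshold {
		fmt.Printf("Trace[%s]: [%s] END\n", name, d)
	}
}

func main() {
	trace("List /api/v1/nodes", 500*time.Millisecond, func() {
		time.Sleep(700 * time.Millisecond) // stand-in for the slow etcd list
	})
}
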
	
	* 
	* ==> kube-controller-manager [8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1] <==
	* I0816 22:14:39.132356       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0816 22:14:39.154048       1 shared_informer.go:247] Caches are synced for deployment 
	I0816 22:14:39.212882       1 shared_informer.go:247] Caches are synced for PV protection 
	I0816 22:14:39.213069       1 shared_informer.go:247] Caches are synced for expand 
	E0816 22:14:39.219789       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dc93fb77-93a8-43bc-ab91-4a6394531af6", ResourceVersion:"276", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764748866, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cc8588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cc85a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c9eca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001be7740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc85b8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc85d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9ece0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001ceca80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ca16d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003f2070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001cb4d10)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ca1728)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0816 22:14:39.220433       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"5f52f9b6-8e90-4323-94f0-ed159021c29e", ResourceVersion:"297", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764748866, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cc85e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cc8600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c9ed60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8618), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8630), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8648), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9ed80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9edc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001cecae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ca1928), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003f20e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001cb4d60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ca1970)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
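
The dump above ends in the standard optimistic-concurrency failure: the controller submitted an update to the kindnet DaemonSet carrying a stale resourceVersion, and the apiserver answered 409 Conflict. Controllers recover by re-reading the object and retrying, which is why this entry is usually benign startup noise. A minimal client-go sketch of that pattern (the kubeconfig path matches this run, but the clientset wiring and the label mutation are illustrative, not minikube's own code):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Re-read on every attempt so the update carries the latest
        // resourceVersion; RetryOnConflict retries only when the apiserver
        // returns a 409 Conflict like the one logged above.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if ds.Labels == nil {
                ds.Labels = map[string]string{}
            }
            ds.Labels["example"] = "true" // placeholder mutation
            _, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            log.Fatal(err)
        }
    }

retry.DefaultRetry backs off briefly between a handful of attempts, so a transient conflict like this one resolves without surfacing an error.
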
	I0816 22:14:39.232974       1 shared_informer.go:247] Caches are synced for cronjob 
	I0816 22:14:39.245131       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.267189       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:14:39.270380       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.281695       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:14:39.281712       1 disruption.go:371] Sending events to api server.
	I0816 22:14:39.304499       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:14:39.550791       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0816 22:14:39.562350       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 22:14:39.641950       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:39.651611       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7wcqt"
	I0816 22:14:39.724203       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.724328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:14:39.731288       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.736318       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:44.033494       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:16:18.834007       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: rpc error: code = Unavailable desc = transport is closing
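
The paired ScalingReplicaSet events above are expected: kubeadm's CoreDNS addon defaults to two replicas and minikube trims the Deployment to one, hence one pod created and one deleted moments later. (The final "transport is closing" error is most likely the controller-manager losing its apiserver connection when the runtime is paused by this test, not part of the scaling flow.) The same scale-down through the scale subresource looks roughly like this; a sketch, not the code minikube runs:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Read the scale subresource, then write it back with one replica:
        // the transition the two events above record.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("coredns scaled to 1 replica")
    }
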
	
	* 
	* ==> kube-proxy [1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04] <==
	* I0816 22:14:40.215578       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:14:40.215638       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:14:40.215676       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:14:40.247009       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:14:40.247045       1 server_others.go:212] Using iptables Proxier.
	I0816 22:14:40.247058       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:14:40.247072       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:14:40.247479       1 server.go:643] Version: v1.21.3
	I0816 22:14:40.248182       1 config.go:315] Starting service config controller
	I0816 22:14:40.248255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:14:40.248210       1 config.go:224] Starting endpoint slice config controller
	I0816 22:14:40.248339       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:14:40.250530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:14:40.251756       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:14:40.348781       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:14:40.348804       1 shared_informer.go:247] Caches are synced for service config 
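
The two client-go warnings above show that kube-proxy v1.21 still watches discovery.k8s.io/v1beta1 EndpointSlices, a group that disappears in v1.25. Code written against a cluster of this version should prefer the GA group; a sketch of the v1 list call (the namespace and printed fields are placeholders):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // List EndpointSlices through the GA discovery.k8s.io/v1 group
        // instead of the deprecated v1beta1 group warned about above.
        slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range slices.Items {
            fmt.Println(s.Name, s.AddressType, len(s.Endpoints))
        }
    }
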
	
	* 
	* ==> kube-scheduler [a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7] <==
	* E0816 22:14:17.590366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:17.693992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.713305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.720174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:19.024758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:19.034977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:19.119270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:14:19.472008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:19.474924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:19.492908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.552344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.701087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.846725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:14:20.081603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:14:20.288654       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:14:20.300668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:14:20.446407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:20.757057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:23.039661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:23.059708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:23.337106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:23.637126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:23.867448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:24.219179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0816 22:14:26.331536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
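
The forbidden errors above are a startup-ordering artifact rather than a misconfiguration: the scheduler's informers begin listing before the apiserver's rbac/bootstrap-roles post-start hook (visible in the healthz output later in this report) has reconciled the system:kube-scheduler bindings. The denials stop by 22:14:24 and the caches sync at 22:14:26. After bootstrap, the grant can be checked with a SubjectAccessReview; a sketch against one of the exact requests that was denied:

    package main

    import (
        "context"
        "fmt"
        "log"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Ask the apiserver whether the scheduler identity may list
        // cluster-scoped persistentvolumes, as in the log lines above.
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "persistentvolumes",
                },
            },
        }
        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("allowed:", res.Status.Allowed, res.Status.Reason)
    }

The CLI equivalent is kubectl auth can-i list persistentvolumes --as=system:kube-scheduler.
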
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:16:19 UTC. --
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628289    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628372    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628398    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628484    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:14:51 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:51.871688    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:01 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:01.922738    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:11 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:11.972524    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894667    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894750    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894785    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894873    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:15:22 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:22.025359    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:32 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:32.078267    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925897    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925902    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735160    1598 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735229    1598 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735255    1598 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:36 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:36.372291    1598 container.go:586] Failed to update stats for container "/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d": /sys/fs/cgroup/cpuset/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/cpuset.cpus found to be empty, continuing to push stats
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.567369    1598 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668799    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dq6\" (UniqueName: \"kubernetes.io/projected/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-kube-api-access-p9dq6\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668851    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-tmp\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
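
Two distinct failures are interleaved in the kubelet log above: sandbox creation fails because the CNI bridge plugin cannot assign an address to cni0 (commonly a stale bridge left by an earlier CNI configuration, or, in a nested-container CI host like this one, missing privileges), and at 22:15:33 the kubelet loses /var/run/crio/crio.sock while CRI-O restarts, after which systemd stops the kubelet. The usual manual cleanup for the first problem is removing the stale bridge so the plugin can recreate it; a hedged sketch using the vishvananda/netlink package (requires root, and is a general workaround, not anything this harness does):

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Delete cni0 if it exists; the bridge CNI plugin recreates it
        // with the currently configured subnet on the next sandbox start.
        link, err := netlink.LinkByName("cni0")
        if err != nil {
            log.Printf("cni0 not present, nothing to clean up: %v", err)
            return
        }
        if err := netlink.LinkDel(link); err != nil {
            log.Fatalf("deleting cni0: %v", err)
        }
        log.Println("removed stale cni0 bridge")
    }
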
	
	* 
	* ==> storage-provisioner [2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee] <==
	* I0816 22:15:38.165511       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:15:38.173044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:15:38.173092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:15:38.180640       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:15:38.180766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
	I0816 22:15:38.180706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ed80fce-ba59-4042-adeb-a8987870e830", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15 became leader
	I0816 22:15:38.280916       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
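
The provisioner serializes writers with client-go leader election; the event above shows it acquiring the kube-system/k8s.io-minikube-hostpath lock, backed by an Endpoints object in this version. The same pattern with the newer Lease-backed lock looks roughly like this (the identity string, timings, and callbacks are placeholders, not the provisioner's actual configuration):

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second, // how long a dead leader blocks takeover
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
                OnStoppedLeading: func() { log.Println("lost lease") },
            },
        })
    }
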
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:16:18.838156  205719 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210816221349-6487
helpers_test.go:236: (dbg) docker inspect pause-20210816221349-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d",
	        "Created": "2021-08-16T22:13:50.947309762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:51.51454931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hosts",
	        "LogPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d-json.log",
	        "Name": "/pause-20210816221349-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210816221349-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210816221349-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210816221349-6487",
	                "Source": "/var/lib/docker/volumes/pause-20210816221349-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210816221349-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "name.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0570bf3c5e1623f8d98964c6c2afad0bc376f97b81690d2719c8fc8bafd98f8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0570bf3c5e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210816221349-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d383b66a4"
	                    ],
	                    "NetworkID": "394b0b68014ce308c4cac60aecb16a91b93630211f90dc3e79f9040bcf6f53a0",
	                    "EndpointID": "66674d2a7391164faa47236ee3755487b5135a367100c27f1e2bc07dde97d027",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
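
The post-mortem above is produced by shelling out to docker inspect; the fields it keys on (container State, port bindings) are equally reachable through the Docker Go SDK. A sketch against the container from this run (error handling trimmed; not the harness's actual code):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        info, err := cli.ContainerInspect(context.Background(), "pause-20210816221349-6487")
        if err != nil {
            log.Fatal(err)
        }
        // The post-mortem above keys on exactly these fields.
        fmt.Println("status:", info.State.Status, "paused:", info.State.Paused, "pid:", info.State.Pid)
        for port, bindings := range info.NetworkSettings.Ports {
            fmt.Println(port, "->", bindings)
        }
    }
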
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (15.775173772s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:16:34.908238  208552 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
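
The status error above comes from probing https://192.168.49.2:8443/healthz: the verbose body shows every post-start hook healthy except etcd, consistent with etcd having just been paused while the apiserver is still serving. Reproducing the probe directly (TLS verification is disabled purely for brevity here; a real client should trust the cluster CA from the minikube profile):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
        }}
        resp, err := c.Get("https://192.168.49.2:8443/healthz?verbose")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A 500 with "[-]etcd failed" matches the status error above.
        fmt.Println(resp.Status)
        fmt.Print(string(body))
    }
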
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25: exit status 110 (1m0.831327709s)
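
The helpers capture both the command's output and its numeric exit status; in Go that falls out of *exec.ExitError, roughly as below (binary path and profile name as used in this run; a sketch, not helpers_test.go itself):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "pause-20210816221349-6487", "logs", "-n", "25")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        // A non-zero exit such as the 110 above surfaces as *exec.ExitError.
        if exitErr, ok := err.(*exec.ExitError); ok {
            fmt.Println("exit status:", exitErr.ExitCode())
        } else if err != nil {
            log.Fatal(err)
        }
    }
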

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:12 UTC | Mon, 16 Aug 2021 22:10:44 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:44 UTC | Mon, 16 Aug 2021 22:10:44 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:10:57 UTC | Mon, 16 Aug 2021 22:11:22 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210816221012-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:24 UTC | Mon, 16 Aug 2021 22:11:29 UTC |
	|         | scheduled-stop-20210816221012-6487       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210816221129-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:36 UTC | Mon, 16 Aug 2021 22:11:42 UTC |
	|         | insufficient-storage-20210816221129-6487 |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210816221142-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:42 UTC | Mon, 16 Aug 2021 22:12:18 UTC |
	|         | force-systemd-flag-20210816221142-6487   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210816221142-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:18 UTC | Mon, 16 Aug 2021 22:12:21 UTC |
	|         | force-systemd-flag-20210816221142-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:44 UTC | Mon, 16 Aug 2021 22:12:34 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:34 UTC | Mon, 16 Aug 2021 22:12:36 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-crio-20210816221142-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:42 UTC | Mon, 16 Aug 2021 22:13:23 UTC |
	|         | offline-crio-20210816221142-6487         |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1                   |                                          |         |         |                               |                               |
	|         | --memory=2048 --wait=true                |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:36 UTC | Mon, 16 Aug 2021 22:13:24 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-crio-20210816221142-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:23 UTC | Mon, 16 Aug 2021 22:13:26 UTC |
	|         | offline-crio-20210816221142-6487         |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:24 UTC | Mon, 16 Aug 2021 22:13:46 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=crio                |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210816221144-6487   | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:46 UTC | Mon, 16 Aug 2021 22:13:49 UTC |
	|         | kubernetes-upgrade-20210816221144-6487   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210816221142-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:44 UTC | Mon, 16 Aug 2021 22:14:50 UTC |
	|         | missing-upgrade-20210816221142-6487      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210816221142-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:50 UTC | Mon, 16 Aug 2021 22:14:53 UTC |
	|         | missing-upgrade-20210816221142-6487      |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210816221453-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:53 UTC | Mon, 16 Aug 2021 22:15:24 UTC |
	|         | force-systemd-env-20210816221453-6487    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210816221221-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:22 UTC | Mon, 16 Aug 2021 22:15:25 UTC |
	|         | stopped-upgrade-20210816221221-6487      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210816221453-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:24 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	|         | force-systemd-env-20210816221453-6487    |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210816221527-6487           | kubenet-20210816221527-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	| delete  | -p flannel-20210816221527-6487           | flannel-20210816221527-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| delete  | -p false-20210816221528-6487             | false-20210816221528-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| start   | -p pause-20210816221349-6487             | pause-20210816221349-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:49 UTC | Mon, 16 Aug 2021 22:15:32 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| start   | -p pause-20210816221349-6487             | pause-20210816221349-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:32 UTC | Mon, 16 Aug 2021 22:15:38 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=crio                 |                                          |         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20210816221326-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:52 UTC | Mon, 16 Aug 2021 22:15:55 UTC |
	|         | running-upgrade-20210816221326-6487      |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:15:55
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:15:55.711235  205051 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:15:55.711320  205051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:55.711324  205051 out.go:311] Setting ErrFile to fd 2...
	I0816 22:15:55.711328  205051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:55.711455  205051 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:15:55.711732  205051 out.go:305] Setting JSON to false
	I0816 22:15:55.747539  205051 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3323,"bootTime":1629148833,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:15:55.747632  205051 start.go:121] virtualization: kvm guest
	I0816 22:15:55.750660  205051 out.go:177] * [no-preload-20210816221555-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:15:55.752204  205051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:15:55.750822  205051 notify.go:169] Checking for updates...
	I0816 22:15:55.753731  205051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:15:55.755094  205051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:15:55.756529  205051 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:15:55.757027  205051 config.go:177] Loaded profile config "cert-options-20210816221525-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:55.757117  205051 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:15:55.757188  205051 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:55.757228  205051 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:15:55.804394  205051 docker.go:132] docker version: linux-19.03.15
	I0816 22:15:55.804473  205051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:15:55.885289  205051 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 22:15:55.840601588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:15:55.885375  205051 docker.go:244] overlay module found
	I0816 22:15:55.887205  205051 out.go:177] * Using the docker driver based on user configuration
	I0816 22:15:55.887229  205051 start.go:278] selected driver: docker
	I0816 22:15:55.887235  205051 start.go:751] validating driver "docker" against <nil>
	I0816 22:15:55.887258  205051 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:15:55.887319  205051 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:15:55.887346  205051 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:15:55.888686  205051 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:15:55.889528  205051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:15:55.972543  205051 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 22:15:55.92568751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:15:55.972657  205051 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 22:15:55.972792  205051 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:15:55.972811  205051 cni.go:93] Creating CNI manager for ""
	I0816 22:15:55.972821  205051 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:15:55.972829  205051 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 22:15:55.972837  205051 start_flags.go:277] config:
	{Name:no-preload-20210816221555-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210816221555-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:15:55.974973  205051 out.go:177] * Starting control plane node no-preload-20210816221555-6487 in cluster no-preload-20210816221555-6487
	I0816 22:15:55.975019  205051 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:15:55.976621  205051 out.go:177] * Pulling base image ...
	I0816 22:15:55.976653  205051 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:15:55.976713  205051 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:15:55.976790  205051 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/config.json ...
	I0816 22:15:55.976817  205051 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/config.json: {Name:mk2fbefce27097f59eef7adfa95ec2752f454ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:15:55.976932  205051 cache.go:108] acquiring lock: {Name:mke3d64dcf3270420cc281e6a6befd30594c50fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.976970  205051 cache.go:108] acquiring lock: {Name:mk5e57436a8282dfcfae97822cf38d63e761cfc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977038  205051 cache.go:108] acquiring lock: {Name:mk8b5cd7b473c3e52a6050458b483dac5b759db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977043  205051 cache.go:108] acquiring lock: {Name:mk05cad4650711c0dc8b82611084ba0487915028 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977082  205051 cache.go:108] acquiring lock: {Name:mk0b84fbea34d74cc2da16fdbda169da7718e6bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977088  205051 cache.go:108] acquiring lock: {Name:mkaf6b977f4d8726e9f72af6acb90bd88d874f24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977122  205051 cache.go:108] acquiring lock: {Name:mkcdf8b8ecbacd813c782d54e6c7afb45d0c081f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977169  205051 cache.go:108] acquiring lock: {Name:mkceeaa65d9a0c7ffeb3f51de4a55a7fa06d6162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977180  205051 cache.go:108] acquiring lock: {Name:mkd757956ba096c9c6c2faef405bc87f0df51e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977244  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0816 22:15:55.977266  205051 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 145.355µs
	I0816 22:15:55.977281  205051 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0816 22:15:55.977281  205051 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:15:55.977297  205051 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:15:55.977313  205051 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:15:55.977339  205051 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:15:55.977345  205051 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0816 22:15:55.977359  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:15:55.977388  205051 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 272.389µs
	I0816 22:15:55.977405  205051 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:15:55.977232  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:15:55.977428  205051 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 511.551µs
	I0816 22:15:55.977458  205051 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:15:55.977155  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:15:55.977483  205051 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 403.554µs
	I0816 22:15:55.977494  205051 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:15:55.976929  205051 cache.go:108] acquiring lock: {Name:mkfe80324338b4a8a4207174129f1dc96c573f20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:55.977588  205051 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0816 22:15:55.977606  205051 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 688.053µs
	I0816 22:15:55.977628  205051 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0816 22:15:55.978238  205051 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0816 22:15:56.079552  205051 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:15:56.079588  205051 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:15:56.079606  205051 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:15:56.079635  205051 start.go:313] acquiring machines lock for no-preload-20210816221555-6487: {Name:mkf7c0efbb44aa0951afe0dbe82e022fcb7e6d84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:15:56.079764  205051 start.go:317] acquired machines lock for "no-preload-20210816221555-6487" in 106.012µs
	I0816 22:15:56.079797  205051 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210816221555-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210816221555-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:15:56.081454  205051 start.go:126] createHost starting for "" (driver="docker")
	I0816 22:15:54.062785  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:54.562791  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:55.062641  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:55.562864  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:56.063221  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:56.562619  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:57.063215  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:57.563543  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:58.063222  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:15:55.456739  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:15:58.498435  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:15:56.084687  205051 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 22:15:56.084904  205051 start.go:160] libmachine.API.Create for "no-preload-20210816221555-6487" (driver="docker")
	I0816 22:15:56.084931  205051 client.go:168] LocalClient.Create starting
	I0816 22:15:56.085007  205051 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:15:56.085041  205051 main.go:130] libmachine: Decoding PEM data...
	I0816 22:15:56.085065  205051 main.go:130] libmachine: Parsing certificate...
	I0816 22:15:56.085210  205051 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:15:56.085238  205051 main.go:130] libmachine: Decoding PEM data...
	I0816 22:15:56.085253  205051 main.go:130] libmachine: Parsing certificate...
	I0816 22:15:56.085597  205051 cli_runner.go:115] Run: docker network inspect no-preload-20210816221555-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 22:15:56.131350  205051 cli_runner.go:162] docker network inspect no-preload-20210816221555-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 22:15:56.131442  205051 network_create.go:255] running [docker network inspect no-preload-20210816221555-6487] to gather additional debugging logs...
	I0816 22:15:56.131462  205051 cli_runner.go:115] Run: docker network inspect no-preload-20210816221555-6487
	W0816 22:15:56.174428  205051 cli_runner.go:162] docker network inspect no-preload-20210816221555-6487 returned with exit code 1
	I0816 22:15:56.174498  205051 network_create.go:258] error running [docker network inspect no-preload-20210816221555-6487]: docker network inspect no-preload-20210816221555-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210816221555-6487
	I0816 22:15:56.174516  205051 network_create.go:260] output of [docker network inspect no-preload-20210816221555-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210816221555-6487
	
	** /stderr **
	I0816 22:15:56.174574  205051 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:15:56.216642  205051 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-394b0b68014c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:c7:80:dd}}
	I0816 22:15:56.217263  205051 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-4ed2783b447d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d5:d5:90:49}}
	I0816 22:15:56.217908  205051 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0005a0080] misses:0}
	I0816 22:15:56.217955  205051 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:15:56.217967  205051 network_create.go:106] attempt to create docker network no-preload-20210816221555-6487 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0816 22:15:56.218011  205051 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210816221555-6487
	I0816 22:15:56.272381  205051 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0816 22:15:56.306152  205051 network_create.go:90] docker network no-preload-20210816221555-6487 192.168.67.0/24 created
	I0816 22:15:56.306182  205051 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20210816221555-6487" container
	I0816 22:15:56.306242  205051 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 22:15:56.356574  205051 cli_runner.go:115] Run: docker volume create no-preload-20210816221555-6487 --label name.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 22:15:56.409556  205051 oci.go:102] Successfully created a docker volume no-preload-20210816221555-6487
	I0816 22:15:56.409639  205051 cli_runner.go:115] Run: docker run --rm --name no-preload-20210816221555-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --entrypoint /usr/bin/test -v no-preload-20210816221555-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 22:15:56.749919  205051 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0816 22:15:56.749976  205051 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 773.005954ms
	I0816 22:15:56.750000  205051 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0816 22:15:57.230038  205051 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc001382010 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:15:57.230093  205051 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0816 22:15:57.339843  205051 oci.go:106] Successfully prepared a docker volume no-preload-20210816221555-6487
	W0816 22:15:57.339894  205051 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 22:15:57.339929  205051 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 22:15:57.339944  205051 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:15:57.339988  205051 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 22:15:57.427933  205051 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210816221555-6487 --name no-preload-20210816221555-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --network no-preload-20210816221555-6487 --ip 192.168.67.2 --volume no-preload-20210816221555-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:15:59.365752  205051 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210816221555-6487 --name no-preload-20210816221555-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210816221555-6487 --network no-preload-20210816221555-6487 --ip 192.168.67.2 --volume no-preload-20210816221555-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (1.937738518s)
	I0816 22:15:59.366609  205051 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Running}}
	I0816 22:15:59.428379  205051 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:15:59.488087  205051 cli_runner.go:115] Run: docker exec no-preload-20210816221555-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 22:15:59.666842  205051 oci.go:278] the created container "no-preload-20210816221555-6487" has a running status.
	I0816 22:15:59.666935  205051 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa...
	I0816 22:16:00.038259  205051 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 22:16:00.087556  205051 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc001382020 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:00.087602  205051 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0816 22:16:00.175648  205051 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc0005a02c8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:00.175691  205051 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0816 22:16:00.242078  205051 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0816 22:16:00.242125  205051 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 4.265131172s
	I0816 22:16:00.242150  205051 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0816 22:16:00.508359  205051 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:16:00.560699  205051 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 22:16:00.560729  205051 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210816221555-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 22:15:59.314025  199410 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.250767317s)
	I0816 22:15:59.563312  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:00.062640  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:00.562621  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:01.063343  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:01.562834  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:02.063044  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:02.563022  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:03.062717  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:03.562558  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:01.538358  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:04.589595  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:00.749409  205051 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:16:00.810396  205051 machine.go:88] provisioning docker machine ...
	I0816 22:16:00.810435  205051 ubuntu.go:169] provisioning hostname "no-preload-20210816221555-6487"
	I0816 22:16:00.810505  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:00.856829  205051 main.go:130] libmachine: Using SSH client type: native
	I0816 22:16:00.856997  205051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32924 <nil> <nil>}
	I0816 22:16:00.857019  205051 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210816221555-6487 && echo "no-preload-20210816221555-6487" | sudo tee /etc/hostname
	I0816 22:16:01.007346  205051 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210816221555-6487
	
	I0816 22:16:01.007434  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:01.060847  205051 main.go:130] libmachine: Using SSH client type: native
	I0816 22:16:01.061040  205051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32924 <nil> <nil>}
	I0816 22:16:01.061067  205051 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210816221555-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210816221555-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210816221555-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:16:01.191683  205051 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:16:01.191720  205051 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:16:01.191777  205051 ubuntu.go:177] setting up certificates
	I0816 22:16:01.191788  205051 provision.go:83] configureAuth start
	I0816 22:16:01.191850  205051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210816221555-6487
	I0816 22:16:01.249799  205051 provision.go:138] copyHostCerts
	I0816 22:16:01.249872  205051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:16:01.249912  205051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:16:01.249959  205051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:16:01.250046  205051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:16:01.250062  205051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:16:01.250085  205051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:16:01.250147  205051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:16:01.250158  205051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:16:01.250178  205051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:16:01.250270  205051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210816221555-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210816221555-6487]
	I0816 22:16:01.314465  205051 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{UncompressedImageCore:0xc000114060 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:01.314511  205051 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0816 22:16:01.636348  205051 provision.go:172] copyRemoteCerts
	I0816 22:16:01.636412  205051 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:16:01.636455  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:01.681394  205051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32924 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:16:01.770894  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:16:01.787109  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0816 22:16:01.802345  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:16:01.817557  205051 provision.go:86] duration metric: configureAuth took 625.755725ms
	I0816 22:16:01.817582  205051 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:16:01.817737  205051 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:16:01.817868  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:01.863387  205051 main.go:130] libmachine: Using SSH client type: native
	I0816 22:16:01.863549  205051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32924 <nil> <nil>}
	I0816 22:16:01.863568  205051 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:16:02.283669  205051 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:16:02.283702  205051 machine.go:91] provisioned docker machine in 1.473282203s
	I0816 22:16:02.283713  205051 client.go:171] LocalClient.Create took 6.198776575s
	I0816 22:16:02.283726  205051 start.go:168] duration metric: libmachine.API.Create for "no-preload-20210816221555-6487" took 6.198821296s
	I0816 22:16:02.283740  205051 start.go:267] post-start starting for "no-preload-20210816221555-6487" (driver="docker")
	I0816 22:16:02.283749  205051 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:16:02.283819  205051 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:16:02.283865  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:02.333967  205051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32924 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:16:02.423758  205051 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:16:02.426376  205051 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:16:02.426402  205051 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:16:02.426415  205051 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:16:02.426422  205051 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:16:02.426432  205051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:16:02.426489  205051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:16:02.426611  205051 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:16:02.426734  205051 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:16:02.433058  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:16:02.448723  205051 start.go:270] post-start completed in 164.968532ms
	I0816 22:16:02.449128  205051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210816221555-6487
	I0816 22:16:02.500031  205051 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/config.json ...
	I0816 22:16:02.500282  205051 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:16:02.500331  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:02.550375  205051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32924 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:16:02.636127  205051 start.go:129] duration metric: createHost completed in 6.554656902s
	I0816 22:16:02.636165  205051 start.go:80] releasing machines lock for "no-preload-20210816221555-6487", held for 6.55638355s
	I0816 22:16:02.636251  205051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210816221555-6487
	I0816 22:16:02.678173  205051 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:16:02.678248  205051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:16:02.724463  205051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32924 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:16:04.062846  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:07.638240  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:06.765130  205051 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0816 22:16:06.865477  205051 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0816 22:16:08.655009  205051 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 12.678014641s
	I0816 22:16:08.655019  205051 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 12.677845219s
	I0816 22:16:08.655039  205051 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0816 22:16:08.655042  205051 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0816 22:16:09.246266  199410 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.328740917s)
	I0816 22:16:12.063529  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:12.563481  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:13.062728  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:13.563265  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:14.063386  199410 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:16:14.145268  199410 kubeadm.go:985] duration metric: took 21.786833055s to wait for elevateKubeSystemPrivileges.
	I0816 22:16:14.145295  199410 kubeadm.go:392] StartCluster complete in 35.50569983s
	I0816 22:16:14.145311  199410 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:16:14.145392  199410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:16:14.146526  199410 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:16:14.662858  199410 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816221528-6487" rescaled to 1
	I0816 22:16:14.662905  199410 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:16:14.664977  199410 out.go:177] * Verifying Kubernetes components...
	I0816 22:16:14.662967  199410 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:16:14.665028  199410 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:16:14.662995  199410 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:16:14.663136  199410 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:16:14.665111  199410 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:16:14.665130  199410 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816221528-6487"
	W0816 22:16:14.665137  199410 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:16:14.665170  199410 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:16:14.665170  199410 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:16:14.665277  199410 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816221528-6487"
	I0816 22:16:14.665585  199410 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:16:14.665767  199410 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:16:14.716405  199410 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816221528-6487"
	W0816 22:16:14.716427  199410 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:16:14.716455  199410 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:16:14.716932  199410 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:16:11.694992  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:14.722097  199410 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:16:14.722270  199410 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:16:14.722291  199410 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:16:14.722354  199410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:16:14.746553  199410 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:16:14.748746  199410 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:16:14.752670  199410 node_ready.go:49] node "old-k8s-version-20210816221528-6487" has status "Ready":"True"
	I0816 22:16:14.752688  199410 node_ready.go:38] duration metric: took 3.913977ms waiting for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:16:14.752699  199410 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:16:14.762888  199410 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace to be "Ready" ...
	I0816 22:16:14.770542  199410 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:16:14.770559  199410 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:16:14.770610  199410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:16:14.772726  199410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32919 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:16:14.816727  199410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32919 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:16:14.884610  199410 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:16:14.926247  199410 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:16:14.935796  199410 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
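	For readability, this is the hosts stanza that the sed pipeline at 22:16:14.746553 splices into the coredns ConfigMap just above the forward directive (reconstructed from that command; the surrounding Corefile directives are elided):
	.:53 {
	    ...
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}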
	I0816 22:16:15.667020  205051 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0816 22:16:15.667066  205051 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 19.690030978s
	I0816 22:16:15.667088  205051 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0816 22:16:15.667113  205051 cache.go:88] Successfully saved all images to host disk.
	I0816 22:16:15.667186  205051 ssh_runner.go:149] Run: systemctl --version
	I0816 22:16:15.671460  205051 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:16:15.718769  205051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:16:15.727454  205051 docker.go:153] disabling docker service ...
	I0816 22:16:15.727510  205051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:16:15.738332  205051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:16:15.746858  205051 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:16:15.810077  205051 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:16:15.885801  205051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
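	A hand-run equivalent of the "disabling docker service" sequence above, assembled as a sketch from the five Run lines (the final probe mirrors the is-active check):
	# stop and mask Docker so it cannot reclaim the container-runtime role from CRI-O
	sudo systemctl stop -f docker.socket
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"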
	I0816 22:16:15.895254  205051 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:16:15.907343  205051 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:16:15.914759  205051 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:16:15.914784  205051 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
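	The two in-place sed edits above leave the following values in /etc/crio/crio.conf (only the affected keys are shown; the rest of the file is untouched):
	# /etc/crio/crio.conf (affected keys only)
	pause_image = "k8s.gcr.io/pause:3.4.1"
	cni_default_network = "kindnet"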
	I0816 22:16:15.922376  205051 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:16:15.928255  205051 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:16:15.928301  205051 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:16:15.934843  205051 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
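	The status-255 sysctl failure above is benign: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, which is why the very next step is a modprobe. The recovery path is equivalent to this sketch:
	# load the module first, then the sysctl key exists to check or set
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolves instead of exiting 255
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # enable IPv4 forwarding for pod traffic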
	I0816 22:16:15.940659  205051 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:16:15.998753  205051 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:16:16.007416  205051 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:16:16.007464  205051 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:16:16.010312  205051 start.go:413] Will wait 60s for crictl version
	I0816 22:16:16.010362  205051 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:16:16.036550  205051 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:16:16.036623  205051 ssh_runner.go:149] Run: crio --version
	I0816 22:16:16.095577  205051 ssh_runner.go:149] Run: crio --version
	I0816 22:16:15.226577  199410 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:16:15.226603  199410 addons.go:344] enableAddons completed in 563.626843ms
	I0816 22:16:16.776076  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:15.245611  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:18.286061  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:16.157138  205051 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:16:16.157205  205051 cli_runner.go:115] Run: docker network inspect no-preload-20210816221555-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:16:16.194186  205051 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:16:16.197452  205051 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
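	The bash one-liner above is an idempotent rewrite: it drops any stale host.minikube.internal line and appends the current one, so after it runs /etc/hosts contains exactly one record of the form:
	192.168.67.1	host.minikube.internal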
	I0816 22:16:16.206205  205051 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:16:16.206248  205051 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:16:16.227293  205051 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:16:16.227313  205051 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 k8s.gcr.io/kube-proxy:v1.22.0-rc.0 k8s.gcr.io/pause:3.4.1 k8s.gcr.io/etcd:3.4.13-3 k8s.gcr.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0816 22:16:16.227387  205051 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:16:16.227390  205051 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:16:16.227413  205051 image.go:133] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0816 22:16:16.227420  205051 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:16:16.227449  205051 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:16:16.227475  205051 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:16:16.227389  205051 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:16:16.227456  205051 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:16:16.227635  205051 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0816 22:16:16.227651  205051 image.go:133] retrieving image: k8s.gcr.io/pause:3.4.1
	I0816 22:16:16.228537  205051 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0816 22:16:16.255625  205051 image.go:171] found k8s.gcr.io/pause:3.4.1 locally: &{UncompressedImageCore:0xc001416190 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:16.255707  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1
	I0816 22:16:16.373353  205051 cache_images.go:106] "k8s.gcr.io/pause:3.4.1" needs transfer: "k8s.gcr.io/pause:3.4.1" does not exist at hash "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253" in container runtime
	I0816 22:16:16.373401  205051 cri.go:205] Removing image: k8s.gcr.io/pause:3.4.1
	I0816 22:16:16.373441  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:16.377786  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.4.1
	I0816 22:16:16.408093  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1
	I0816 22:16:16.408181  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.4.1
	I0816 22:16:16.411657  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.4.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.4.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.4.1': No such file or directory
	I0816 22:16:16.411687  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 --> /var/lib/minikube/images/pause_3.4.1 (301056 bytes)
	I0816 22:16:16.436687  205051 crio.go:191] Loading image: /var/lib/minikube/images/pause_3.4.1
	I0816 22:16:16.436755  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/pause_3.4.1
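	Each cached image below goes through the same three-step cycle that pause:3.4.1 just completed: an existence check on the node, a copy from the host-side cache if the tarball is missing, then a podman load. A sketch of one cycle, where load_from_host_cache is hypothetical shorthand for the ssh_runner scp step (not a real minikube helper):
	IMG=/var/lib/minikube/images/pause_3.4.1
	stat -c "%s %y" "$IMG" || load_from_host_cache "$IMG"   # scp from .minikube/cache/images if absent
	sudo podman load -i "$IMG"                              # import the tarball into CRI-O's image storage
	sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.4.1   # confirm the image landed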
	I0816 22:16:16.469966  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:16:16.603437  205051 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc000114e08 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:16.603521  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:16:16.894820  205051 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{UncompressedImageCore:0xc001416010 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:16.894927  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:16:16.950729  205051 image.go:171] found k8s.gcr.io/coredns/coredns:v1.8.0 locally: &{UncompressedImageCore:0xc0006ae0c0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:16.950838  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.0
	I0816 22:16:16.967526  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 from cache
	I0816 22:16:16.967642  205051 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" does not exist at hash "ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c" in container runtime
	I0816 22:16:16.967675  205051 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:16:16.967716  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:16.967821  205051 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 22:16:16.967841  205051 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:16:16.967863  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:17.022538  205051 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0816 22:16:17.022598  205051 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:16:17.022655  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:17.054653  205051 cache_images.go:106] "k8s.gcr.io/coredns/coredns:v1.8.0" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.0" does not exist at hash "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899" in container runtime
	I0816 22:16:17.054704  205051 cri.go:205] Removing image: k8s.gcr.io/coredns/coredns:v1.8.0
	I0816 22:16:17.054744  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:17.054841  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:16:17.054907  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0816 22:16:17.054986  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:16:17.081700  205051 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc000010090 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:17.081790  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:16:17.100356  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 22:16:17.100440  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:16:17.100441  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0816 22:16:17.100526  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:16:17.100550  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	I0816 22:16:17.100553  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.0
	I0816 22:16:17.100600  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0816 22:16:17.154499  205051 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" does not exist at hash "7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75" in container runtime
	I0816 22:16:17.154545  205051 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:16:17.154585  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0
	I0816 22:16:17.154647  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.0
	I0816 22:16:17.154661  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 22:16:17.154588  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:17.154685  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0816 22:16:17.154740  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0816 22:16:17.154764  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0816 22:16:17.154806  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.22.0-rc.0': No such file or directory
	I0816 22:16:17.154822  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 --> /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0 (35940352 bytes)
	I0816 22:16:17.160699  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_v1.8.0: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.0': No such file or directory
	I0816 22:16:17.160726  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 --> /var/lib/minikube/images/coredns_v1.8.0 (12946944 bytes)
	I0816 22:16:17.166411  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0816 22:16:17.264557  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0816 22:16:17.264647  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0816 22:16:17.277278  205051 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:16:17.277353  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:16:17.297419  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0': No such file or directory
	I0816 22:16:17.297456  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 --> /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0 (17478144 bytes)
	I0816 22:16:17.710682  205051 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc000010250 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:17.710808  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:16:17.797446  205051 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{UncompressedImageCore:0xc0000106c0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:17.797524  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:16:18.596283  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.3188988s)
	I0816 22:16:18.596314  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 22:16:18.596334  205051 crio.go:191] Loading image: /var/lib/minikube/images/coredns_v1.8.0
	I0816 22:16:18.596378  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.0
	I0816 22:16:18.596401  205051 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" does not exist at hash "b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a" in container runtime
	I0816 22:16:18.596374  205051 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" does not exist at hash "cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c" in container runtime
	I0816 22:16:18.596436  205051 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:16:18.596469  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:18.596471  205051 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:16:18.596524  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:18.645557  205051 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{UncompressedImageCore:0xc0005a0198 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:18.645648  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:16:18.941492  205051 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{UncompressedImageCore:0xc0000102a8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:16:18.941569  205051 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3
	I0816 22:16:20.064029  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.0: (1.467625814s)
	I0816 22:16:20.064053  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 from cache
	I0816 22:16:20.064087  205051 crio.go:191] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:16:20.064120  205051 ssh_runner.go:189] Completed: which crictl: (1.467581194s)
	I0816 22:16:20.064140  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:16:20.064177  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0816 22:16:20.064196  205051 ssh_runner.go:189] Completed: which crictl: (1.467711219s)
	I0816 22:16:20.064236  205051 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (1.418572796s)
	I0816 22:16:20.064247  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0816 22:16:20.064279  205051 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-3: (1.122696912s)
	I0816 22:16:20.064278  205051 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0816 22:16:20.064349  205051 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:16:20.064316  205051 cache_images.go:106] "k8s.gcr.io/etcd:3.4.13-3" needs transfer: "k8s.gcr.io/etcd:3.4.13-3" does not exist at hash "d1f2268f5826f365987f29115fc55a710d4fb945d2913108fcbc1335763f7de8" in container runtime
	I0816 22:16:20.064398  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:20.064412  205051 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.13-3
	I0816 22:16:20.064439  205051 ssh_runner.go:149] Run: which crictl
	I0816 22:16:20.089576  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0816 22:16:20.089664  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0816 22:16:19.274199  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:21.773608  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:21.348027  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:24.401032  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:21.340110  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (1.275947703s)
	I0816 22:16:21.340140  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0816 22:16:21.340159  205051 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: (1.27589792s)
	I0816 22:16:21.340160  205051 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0816 22:16:21.340194  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0816 22:16:21.340206  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0
	I0816 22:16:21.340228  205051 ssh_runner.go:189] Completed: which crictl: (1.275770789s)
	I0816 22:16:21.340252  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0816 22:16:21.340280  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.13-3
	I0816 22:16:21.340316  205051 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (1.250637965s)
	I0816 22:16:21.340279  205051 ssh_runner.go:189] Completed: which crictl: (1.275866041s)
	I0816 22:16:21.340340  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0': No such file or directory
	I0816 22:16:21.340358  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 --> /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0 (35534336 bytes)
	I0816 22:16:21.340358  205051 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:16:23.244153  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.22.0-rc.0: (1.903924741s)
	I0816 22:16:23.244194  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 from cache
	I0816 22:16:23.244210  205051 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0816 22:16:23.244239  205051 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.13-3: (1.903934355s)
	I0816 22:16:23.244252  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0
	I0816 22:16:23.244280  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0816 22:16:23.244324  205051 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: (1.904053097s)
	I0816 22:16:23.244353  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-3
	I0816 22:16:23.244369  205051 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0: (1.903993605s)
	I0816 22:16:23.244395  205051 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0816 22:16:23.244355  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0': No such file or directory
	I0816 22:16:23.244416  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 --> /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0 (37427200 bytes)
	I0816 22:16:23.244450  205051 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:16:23.774497  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:26.275088  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:27.447759  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:16:26.247836  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.22.0-rc.0: (3.003565436s)
	I0816 22:16:26.247862  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 from cache
	I0816 22:16:26.247891  205051 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0816 22:16:26.247924  205051 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-3: (3.003528989s)
	I0816 22:16:26.247955  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-3: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-3': No such file or directory
	I0816 22:16:26.247971  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0
	I0816 22:16:26.247974  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 --> /var/lib/minikube/images/etcd_3.4.13-3 (98442752 bytes)
	I0816 22:16:26.248002  205051 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: (3.003534812s)
	I0816 22:16:26.248028  205051 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0816 22:16:26.248048  205051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0816 22:16:29.343687  205051 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.22.0-rc.0: (3.095691199s)
	I0816 22:16:29.343719  205051 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 from cache
	I0816 22:16:29.343746  205051 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0816 22:16:29.343779  205051 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.22.0-rc.0
	I0816 22:16:28.774358  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:16:31.274081  199410 pod_ready.go:102] pod "coredns-fb8b8dccf-sw9nf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:16:35 UTC. --
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.997181209Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.998911132Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001615431Z" level=info msg="Conmon does support the --sync option"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001679089Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001686289Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.006618470Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.009192800Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.011666290Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023071034Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023093501Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.335777529Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-7wcqt Namespace:kube-system ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 NetNS:/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.336029066Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 22:15:34 pause-20210816221349-6487 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 16 22:15:37 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:37.869390202Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.008166152Z" level=info msg="Ran pod sandbox 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 with infra container: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.009539171Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010209695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010864154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.011418773Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.012207175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023341263Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/passwd: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023470306Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/group: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144183330Z" level=info msg="Created container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144745336Z" level=info msg="Starting container: 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.155275298Z" level=info msg="Started container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	2bd1364ac865c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   57 seconds ago       Running             storage-provisioner       0                   4e41a3650a65f
	a3847cf5a7a0a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   About a minute ago   Running             coredns                   0                   ace8d49de7551
	ba2e9dd72df01       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   d5d9684c84cea
	1b3d3880e345b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   4a42049e95348
	1b4dd675dc4bc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   2 minutes ago        Running             etcd                      0                   c159e3fd639d7
	8a5626e3acb8d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   2 minutes ago        Running             kube-controller-manager   0                   5f516d619d78c
	e812d329ba697       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   2 minutes ago        Running             kube-apiserver            0                   a055e0d3dc6de
	a65e43c156f4f       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   2 minutes ago        Running             kube-scheduler            0                   97b975cd86e3b
	
	* 
	* ==> coredns [a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff ae c5 9a 81 f2 bb 08 06        ..............
	[  +4.586677] IPv4: martian source 10.88.0.2 from 10.88.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 44 72 c2 9e eb 08 06        .......Dr.....
	[  +0.000006] IPv4: martian source 10.88.0.2 from 10.88.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 96 44 72 c2 9e eb 08 06        .......Dr.....
	[  +0.324396] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 e9 d5 f7 bd 83 08 06        ..............
	[ +10.087279] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth4a11b195
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 40 49 70 a6 2e 08 06        .......@Ip....
	[  +1.336173] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.800279] IPv4: martian source 10.88.0.4 from 10.88.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce c3 8d 30 44 be 08 06        .........0D...
	[  +0.916901] IPv4: martian source 10.88.0.5 from 10.88.0.5, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 20 c4 62 e7 25 08 06        ....... .b.%..
	[ +16.836824] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:16] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e 0d 8e 96 3e 98 08 06        ..........>...
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 1e 0d 8e 96 3e 98 08 06        ..........>...
	[ +22.602431] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 56 04 89 bc a0 e4 08 06        ......V.......
	[Aug16 22:17] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth10a9e10e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e b0 04 52 84 25 08 06        ......>..R.%..
	[ +10.727254] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc7a1d171
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 54 14 0d 99 8e 08 06        ......2T......
	
	* 
	* ==> etcd [1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe] <==
	* 2021-08-16 22:14:21.479302 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" " with result "range_response_count:0 size:4" took too long (4.338058844s) to execute
	2021-08-16 22:14:21.479372 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (5.261433612s) to execute
	2021-08-16 22:14:21.479441 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (5.23712455s) to execute
	2021-08-16 22:14:21.479474 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (5.175788463s) to execute
	2021-08-16 22:14:21.479613 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210816221349-6487\" " with result "range_response_count:1 size:4444" took too long (5.261587425s) to execute
	2021-08-16 22:14:21.479683 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/system-node-critical\" " with result "range_response_count:0 size:4" took too long (4.338074095s) to execute
	2021-08-16 22:14:21.479744 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (5.263168271s) to execute
	2021-08-16 22:14:23.363332 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000022778s) to execute
	2021-08-16 22:14:23.693334 W | wal: sync duration of 2.214752234s, expected less than 1s
	2021-08-16 22:14:23.700535 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (2.770548489s) to execute
	2021-08-16 22:14:23.700621 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/csr-76sbr\" " with result "range_response_count:1 size:916" took too long (4.511679407s) to execute
	2021-08-16 22:14:23.707488 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:0 size:4" took too long (2.220681713s) to execute
	2021-08-16 22:14:23.707931 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (2.217648018s) to execute
	2021-08-16 22:14:23.708160 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (2.166841468s) to execute
	2021-08-16 22:14:23.708359 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:0 size:4" took too long (2.218216556s) to execute
	2021-08-16 22:14:23.708455 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:4" took too long (2.217851178s) to execute
	2021-08-16 22:14:24.811296 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:5" took too long (246.052034ms) to execute
	2021-08-16 22:14:24.811319 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (374.900233ms) to execute
	2021-08-16 22:14:24.811421 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (278.56366ms) to execute
	2021-08-16 22:14:42.629727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:14:52.246014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:02.245960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:12.246500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:22.245983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:32.246694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:17:35 up 57 min,  0 users,  load average: 3.37, 3.82, 2.35
	Linux pause-20210816221349-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef] <==
	* W0816 22:17:27.916562       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:28.759205       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:28.930328       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:29.216397       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:29.364466       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:29.415459       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:29.420702       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:32.055075       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:32.139435       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:32.620780       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:17:33.434805       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	E0816 22:17:33.747299       1 repair.go:75] unable to refresh the port allocations: rpc error: code = Unavailable desc = transport is closing
	E0816 22:17:33.752324       1 repair.go:118] unable to refresh the service IP block: rpc error: code = Unavailable desc = transport is closing
	I0816 22:17:33.769135       1 trace.go:205] Trace[1933917849]: "Create" url:/api/v1/namespaces,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:17:23.768) (total time: 10001ms):
	Trace[1933917849]: [10.00108992s] [10.00108992s] END
	E0816 22:17:33.774096       1 controller.go:203] unable to create required kubernetes system namespace kube-system: Internal error occurred: resource quota evaluation timed out
	W0816 22:17:35.440992       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:17:35.524818       1 trace.go:205] Trace[1453496627]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:16:35.524) (total time: 59999ms):
	Trace[1453496627]: [59.999833727s] [59.999833727s] END
	E0816 22:17:35.524852       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:17:35.524961       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:17:35.526203       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:17:35.527332       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:17:35.528739       1 trace.go:205] Trace[1171587911]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:16:35.524) (total time: 60003ms):
	Trace[1171587911]: [1m0.003774205s] [1m0.003774205s] END
	
	* 
	* ==> kube-controller-manager [8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1] <==
	* I0816 22:14:39.213069       1 shared_informer.go:247] Caches are synced for expand 
	E0816 22:14:39.219789       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dc93fb77-93a8-43bc-ab91-4a6394531af6", ResourceVersion:"276", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764748866, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cc8588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cc85a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc001c9eca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001be7740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8
5b8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc85d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9ece0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001ceca80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ca16d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003f2070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001cb4d10)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ca1728)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0816 22:14:39.220433       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"5f52f9b6-8e90-4323-94f0-ed159021c29e", ResourceVersion:"297", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764748866, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cc85e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cc8600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c9ed60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8618), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8630), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cc8648), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9ed80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c9edc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001cecae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ca1928), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003f20e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001cb4d60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ca1970)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:14:39.232974       1 shared_informer.go:247] Caches are synced for cronjob 
	I0816 22:14:39.245131       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.267189       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:14:39.270380       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.281695       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:14:39.281712       1 disruption.go:371] Sending events to api server.
	I0816 22:14:39.304499       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:14:39.550791       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0816 22:14:39.562350       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 22:14:39.641950       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:39.651611       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7wcqt"
	I0816 22:14:39.724203       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.724328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:14:39.731288       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.736318       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:44.033494       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:16:18.834007       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: rpc error: code = Unavailable desc = transport is closing
	E0816 22:17:18.835292       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:17:18.835317       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:17:23.835603       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	
	* 
	* ==> kube-proxy [1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04] <==
	* I0816 22:14:40.215578       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:14:40.215638       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:14:40.215676       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:14:40.247009       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:14:40.247045       1 server_others.go:212] Using iptables Proxier.
	I0816 22:14:40.247058       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:14:40.247072       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:14:40.247479       1 server.go:643] Version: v1.21.3
	I0816 22:14:40.248182       1 config.go:315] Starting service config controller
	I0816 22:14:40.248255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:14:40.248210       1 config.go:224] Starting endpoint slice config controller
	I0816 22:14:40.248339       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:14:40.250530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:14:40.251756       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:14:40.348781       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:14:40.348804       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7] <==
	* E0816 22:14:17.590366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:17.693992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.713305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.720174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:19.024758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:19.034977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:19.119270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:14:19.472008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:19.474924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:19.492908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.552344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.701087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.846725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:14:20.081603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:14:20.288654       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:14:20.300668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:14:20.446407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:20.757057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:23.039661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:23.059708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:23.337106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:23.637126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:23.867448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:24.219179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0816 22:14:26.331536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:17:35 UTC. --
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628289    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628372    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628398    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628484    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:14:51 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:51.871688    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:01 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:01.922738    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:11 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:11.972524    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894667    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894750    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894785    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894873    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:15:22 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:22.025359    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:32 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:32.078267    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925897    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925902    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735160    1598 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735229    1598 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735255    1598 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:36 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:36.372291    1598 container.go:586] Failed to update stats for container "/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d": /sys/fs/cgroup/cpuset/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/cpuset.cpus found to be empty, continuing to push stats
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.567369    1598 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668799    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dq6\" (UniqueName: \"kubernetes.io/projected/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-kube-api-access-p9dq6\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668851    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-tmp\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee] <==
	* I0816 22:15:38.165511       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:15:38.173044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:15:38.173092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:15:38.180640       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:15:38.180766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
	I0816 22:15:38.180706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ed80fce-ba59-4042-adeb-a8987870e830", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15 became leader
	I0816 22:15:38.280916       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
	
	

-- /stdout --
** stderr ** 
	E0816 22:17:35.528676  209257 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (116.87s)

TestPause/serial/VerifyStatus (95.54s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210816221349-6487 --output=json --layout=cluster

=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210816221349-6487 --output=json --layout=cluster: exit status 2 (17.347972989s)

-- stdout --
	{"Name":"pause-20210816221349-6487","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210816221349-6487 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210816221349-6487","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":500,"StatusName":"Error"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 22:17:53.109384  213579 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	
	E0816 22:17:53.109724  213579 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0816 22:17:53.109754  213579 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0816 22:17:53.109776  213579 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:190: incorrect status code: 101
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210816221349-6487
helpers_test.go:236: (dbg) docker inspect pause-20210816221349-6487:

-- stdout --
	[
	    {
	        "Id": "859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d",
	        "Created": "2021-08-16T22:13:50.947309762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:51.51454931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hosts",
	        "LogPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d-json.log",
	        "Name": "/pause-20210816221349-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210816221349-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210816221349-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210816221349-6487",
	                "Source": "/var/lib/docker/volumes/pause-20210816221349-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210816221349-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "name.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0570bf3c5e1623f8d98964c6c2afad0bc376f97b81690d2719c8fc8bafd98f8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0570bf3c5e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210816221349-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d383b66a4"
	                    ],
	                    "NetworkID": "394b0b68014ce308c4cac60aecb16a91b93630211f90dc3e79f9040bcf6f53a0",
	                    "EndpointID": "66674d2a7391164faa47236ee3755487b5135a367100c27f1e2bc07dde97d027",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
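For reference, the fields the harness reads out of an inspect dump like the one above (e.g. State.Status, and the published host ports under NetworkSettings.Ports) can be decoded in Go roughly as follows. This is a minimal illustrative sketch, not minikube's own code: the struct and variable names are invented here, and it assumes a `docker` CLI on PATH and the profile name from this report.

	// Sketch: decode selected fields from `docker container inspect` JSON.
	// The inspectEntry type models only the fields referenced in this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
		} `json:"State"`
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// `docker container inspect` prints a JSON array, one entry per container.
		out, err := exec.Command("docker", "container", "inspect", "pause-20210816221349-6487").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Println("status:", e.State.Status)
			// e.g. the SSH port: 22/tcp -> 127.0.0.1:32901 in the dump above.
			for _, b := range e.NetworkSettings.Ports["22/tcp"] {
				fmt.Printf("22/tcp -> %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}

The same fields are what the harness queries directly with Go templates, as seen in the `docker container inspect --format={{.State.Status}}` invocations later in this log.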
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (17.333589838s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:18:10.485305  215422 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
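The 500 above comes from the apiserver's /healthz endpoint reporting etcd as unhealthy while the rest of the checks pass. Below is a minimal sketch of that kind of probe in Go; the real check in minikube's status.go authenticates with the cluster's credentials, whereas this sketch skips TLS verification purely for illustration, and the endpoint address is the one captured in the stderr above.

	// Sketch: probe the apiserver health endpoint the way the error above was produced.
	// InsecureSkipVerify is an assumption for illustration only; do not use it in real checks.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 500 status with a "[-]etcd failed" line matches the output captured above.
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}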
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25: exit status 110 (1m0.786799598s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | force-systemd-flag-20210816221142-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:18 UTC | Mon, 16 Aug 2021 22:12:21 UTC |
	|         | force-systemd-flag-20210816221142-6487            |                                        |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:44 UTC | Mon, 16 Aug 2021 22:12:34 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	|         | --memory=2200                                     |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                        |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                        |         |         |                               |                               |
	| stop    | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:34 UTC | Mon, 16 Aug 2021 22:12:36 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	| start   | -p                                                | offline-crio-20210816221142-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:42 UTC | Mon, 16 Aug 2021 22:13:23 UTC |
	|         | offline-crio-20210816221142-6487                  |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                        |         |         |                               |                               |
	|         | --memory=2048 --wait=true                         |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:12:36 UTC | Mon, 16 Aug 2021 22:13:24 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	|         | --memory=2200                                     |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                        |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                        |         |         |                               |                               |
	| delete  | -p                                                | offline-crio-20210816221142-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:23 UTC | Mon, 16 Aug 2021 22:13:26 UTC |
	|         | offline-crio-20210816221142-6487                  |                                        |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:24 UTC | Mon, 16 Aug 2021 22:13:46 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	|         | --memory=2200                                     |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                        |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                        |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:46 UTC | Mon, 16 Aug 2021 22:13:49 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	| start   | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:44 UTC | Mon, 16 Aug 2021 22:14:50 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:50 UTC | Mon, 16 Aug 2021 22:14:53 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:53 UTC | Mon, 16 Aug 2021 22:15:24 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210816221221-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:22 UTC | Mon, 16 Aug 2021 22:15:25 UTC |
	|         | stopped-upgrade-20210816221221-6487               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:24 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816221527-6487                    | kubenet-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	| delete  | -p flannel-20210816221527-6487                    | flannel-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| delete  | -p false-20210816221528-6487                      | false-20210816221528-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:49 UTC | Mon, 16 Aug 2021 22:15:32 UTC |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --install-addons=false                            |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:32 UTC | Mon, 16 Aug 2021 22:15:38 UTC |
	|         | --alsologtostderr                                 |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210816221326-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:52 UTC | Mon, 16 Aug 2021 22:15:55 UTC |
	|         | running-upgrade-20210816221326-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:17:12 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:22 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:55 UTC | Mon, 16 Aug 2021 22:17:51 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:17:59 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:17:43
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:17:43.616354  213866 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:17:43.616720  213866 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:17:43.616735  213866 out.go:311] Setting ErrFile to fd 2...
	I0816 22:17:43.616741  213866 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:17:43.617002  213866 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:17:43.617583  213866 out.go:305] Setting JSON to false
	I0816 22:17:43.653275  213866 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3431,"bootTime":1629148833,"procs":276,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:17:43.653357  213866 start.go:121] virtualization: kvm guest
	I0816 22:17:43.655831  213866 out.go:177] * [old-k8s-version-20210816221528-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:17:43.655945  213866 notify.go:169] Checking for updates...
	I0816 22:17:43.657334  213866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:17:43.659058  213866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:17:43.660652  213866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:17:43.662104  213866 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:17:43.662491  213866 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:17:43.664315  213866 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:17:43.664342  213866 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:17:43.712690  213866 docker.go:132] docker version: linux-19.03.15
	I0816 22:17:43.712754  213866 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:17:43.790827  213866 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:60 SystemTime:2021-08-16 22:17:43.747470712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:17:43.790918  213866 docker.go:244] overlay module found
	I0816 22:17:43.793412  213866 out.go:177] * Using the docker driver based on existing profile
	I0816 22:17:43.793442  213866 start.go:278] selected driver: docker
	I0816 22:17:43.793447  213866 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210816221528-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210816221528-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:17:43.793564  213866 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:17:43.793605  213866 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:17:43.793624  213866 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:17:43.794910  213866 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:17:43.795685  213866 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:17:43.872903  213866 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:60 SystemTime:2021-08-16 22:17:43.831279795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:17:43.873015  213866 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:17:43.873038  213866 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:17:43.874793  213866 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:17:43.874884  213866 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:17:43.874906  213866 cni.go:93] Creating CNI manager for ""
	I0816 22:17:43.874921  213866 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:17:43.874930  213866 start_flags.go:277] config:
	{Name:old-k8s-version-20210816221528-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210816221528-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:17:43.876771  213866 out.go:177] * Starting control plane node old-k8s-version-20210816221528-6487 in cluster old-k8s-version-20210816221528-6487
	I0816 22:17:43.876800  213866 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:17:43.878153  213866 out.go:177] * Pulling base image ...
	I0816 22:17:43.878177  213866 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0816 22:17:43.878205  213866 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:17:43.878217  213866 cache.go:56] Caching tarball of preloaded images
	I0816 22:17:43.878286  213866 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:17:43.878362  213866 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:17:43.878381  213866 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0816 22:17:43.878490  213866 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/config.json ...
	I0816 22:17:43.963665  213866 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:17:43.963695  213866 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:17:43.963711  213866 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:17:43.963746  213866 start.go:313] acquiring machines lock for old-k8s-version-20210816221528-6487: {Name:mk08c34ee0d606bd7df5a282d0a4b4d4f1d8c694 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:17:43.963829  213866 start.go:317] acquired machines lock for "old-k8s-version-20210816221528-6487" in 63.823µs
	I0816 22:17:43.963849  213866 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:17:43.963854  213866 fix.go:55] fixHost starting: 
	I0816 22:17:43.964113  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:17:44.004969  213866 fix.go:108] recreateIfNeeded on old-k8s-version-20210816221528-6487: state=Stopped err=<nil>
	W0816 22:17:44.004997  213866 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:17:40.492706  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:43.535475  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:42.027713  205051 pod_ready.go:102] pod "coredns-78fcd69978-bwf24" in "kube-system" namespace has status "Ready":"False"
	I0816 22:17:44.028108  205051 pod_ready.go:102] pod "coredns-78fcd69978-bwf24" in "kube-system" namespace has status "Ready":"False"
	I0816 22:17:44.007299  213866 out.go:177] * Restarting existing docker container for "old-k8s-version-20210816221528-6487" ...
	I0816 22:17:44.007359  213866 cli_runner.go:115] Run: docker start old-k8s-version-20210816221528-6487
	I0816 22:17:44.559288  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:17:44.600720  213866 kic.go:420] container "old-k8s-version-20210816221528-6487" state is running.
	I0816 22:17:44.601236  213866 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210816221528-6487
	I0816 22:17:44.640705  213866 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/config.json ...
	I0816 22:17:44.640912  213866 machine.go:88] provisioning docker machine ...
	I0816 22:17:44.640934  213866 ubuntu.go:169] provisioning hostname "old-k8s-version-20210816221528-6487"
	I0816 22:17:44.640989  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:44.681924  213866 main.go:130] libmachine: Using SSH client type: native
	I0816 22:17:44.682164  213866 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32929 <nil> <nil>}
	I0816 22:17:44.682191  213866 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210816221528-6487 && echo "old-k8s-version-20210816221528-6487" | sudo tee /etc/hostname
	I0816 22:17:44.682766  213866 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57080->127.0.0.1:32929: read: connection reset by peer
	I0816 22:17:47.815315  213866 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210816221528-6487
	
	I0816 22:17:47.815386  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:47.856232  213866 main.go:130] libmachine: Using SSH client type: native
	I0816 22:17:47.856429  213866 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32929 <nil> <nil>}
	I0816 22:17:47.856451  213866 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210816221528-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210816221528-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210816221528-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:17:47.979521  213866 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:17:47.979552  213866 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:17:47.979574  213866 ubuntu.go:177] setting up certificates
	I0816 22:17:47.979581  213866 provision.go:83] configureAuth start
	I0816 22:17:47.979625  213866 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210816221528-6487
	I0816 22:17:48.021285  213866 provision.go:138] copyHostCerts
	I0816 22:17:48.021338  213866 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:17:48.021345  213866 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:17:48.021398  213866 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:17:48.021469  213866 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:17:48.021503  213866 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:17:48.021529  213866 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:17:48.021585  213866 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:17:48.021593  213866 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:17:48.021620  213866 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:17:48.021660  213866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210816221528-6487 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210816221528-6487]
	I0816 22:17:48.223459  213866 provision.go:172] copyRemoteCerts
	I0816 22:17:48.223515  213866 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:17:48.223548  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:48.266437  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:17:48.376189  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:17:48.392867  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:17:48.409103  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:17:48.429475  213866 provision.go:86] duration metric: configureAuth took 449.882056ms
	I0816 22:17:48.429514  213866 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:17:48.429662  213866 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:17:48.429788  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:48.474555  213866 main.go:130] libmachine: Using SSH client type: native
	I0816 22:17:48.474750  213866 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32929 <nil> <nil>}
	I0816 22:17:48.474774  213866 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:17:48.938606  213866 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:17:48.938642  213866 machine.go:91] provisioned docker machine in 4.29771469s
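	The SSH command above writes a one-line environment drop-in and bounces CRI-O. If it needs to be checked by hand (via minikube ssh into the node), a minimal sketch, assuming the crio unit sources /etc/sysconfig/crio.minikube through an EnvironmentFile directive:
	
	    cat /etc/sysconfig/crio.minikube
	    systemctl status crio --no-pager | head -n 5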
	I0816 22:17:48.938655  213866 start.go:267] post-start starting for "old-k8s-version-20210816221528-6487" (driver="docker")
	I0816 22:17:48.938664  213866 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:17:48.938723  213866 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:17:48.938776  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:48.979027  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:17:49.066888  213866 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:17:49.069539  213866 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:17:49.069561  213866 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:17:49.069569  213866 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:17:49.069574  213866 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:17:49.069582  213866 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:17:49.069630  213866 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:17:49.069715  213866 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:17:49.069827  213866 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:17:49.076109  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:17:49.091557  213866 start.go:270] post-start completed in 152.889579ms
	I0816 22:17:49.091610  213866 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:17:49.091639  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:49.130797  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:17:49.215935  213866 fix.go:57] fixHost completed within 5.252074609s
	I0816 22:17:49.215963  213866 start.go:80] releasing machines lock for "old-k8s-version-20210816221528-6487", held for 5.252122452s
	I0816 22:17:49.216055  213866 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210816221528-6487
	I0816 22:17:49.255888  213866 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:17:49.255951  213866 ssh_runner.go:149] Run: systemctl --version
	I0816 22:17:49.256000  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:49.256000  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:17:49.301071  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:17:49.301993  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:17:49.387656  213866 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:17:49.422998  213866 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:17:49.431607  213866 docker.go:153] disabling docker service ...
	I0816 22:17:49.431652  213866 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:17:49.439856  213866 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:17:49.447963  213866 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:17:49.506461  213866 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:17:49.571983  213866 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:17:49.582342  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:17:49.594274  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0816 22:17:49.602090  213866 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:17:49.602115  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
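	The two sed edits above pin the pause image and the default CNI network in /etc/crio/crio.conf. A quick confirmation on the node, with illustrative output (not captured from this run):
	
	    grep -E 'pause_image|cni_default_network' /etc/crio/crio.conf
	    # pause_image = "k8s.gcr.io/pause:3.1"
	    # cni_default_network = "kindnet"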
	I0816 22:17:49.609318  213866 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:17:49.614872  213866 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:17:49.614910  213866 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:17:49.621458  213866 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
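	The sysctl probe above fails with status 255 only because br_netfilter is not loaded yet, which the code treats as non-fatal; after modprobe the key exists. The same bridge-netfilter preparation, sketched by hand on the node:
	
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # readable once the module is loaded
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # what the echo above does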
	I0816 22:17:49.627386  213866 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:17:49.688953  213866 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:17:49.697790  213866 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:17:49.697845  213866 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:17:49.700685  213866 start.go:413] Will wait 60s for crictl version
	I0816 22:17:49.700722  213866 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:17:49.727846  213866 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:17:49.727942  213866 ssh_runner.go:149] Run: crio --version
	I0816 22:17:49.787085  213866 ssh_runner.go:149] Run: crio --version
	I0816 22:17:46.576280  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:49.617131  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:46.527328  205051 pod_ready.go:102] pod "coredns-78fcd69978-bwf24" in "kube-system" namespace has status "Ready":"False"
	I0816 22:17:48.528120  205051 pod_ready.go:102] pod "coredns-78fcd69978-bwf24" in "kube-system" namespace has status "Ready":"False"
	I0816 22:17:50.027549  205051 pod_ready.go:92] pod "coredns-78fcd69978-bwf24" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.027574  205051 pod_ready.go:81] duration metric: took 25.021296163s waiting for pod "coredns-78fcd69978-bwf24" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.027583  205051 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-m4p5q" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.029484  205051 pod_ready.go:97] error getting pod "coredns-78fcd69978-m4p5q" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-m4p5q" not found
	I0816 22:17:50.029504  205051 pod_ready.go:81] duration metric: took 1.916123ms waiting for pod "coredns-78fcd69978-m4p5q" in "kube-system" namespace to be "Ready" ...
	E0816 22:17:50.029512  205051 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-m4p5q" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-m4p5q" not found
	I0816 22:17:50.029518  205051 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.033430  205051 pod_ready.go:92] pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.033445  205051 pod_ready.go:81] duration metric: took 3.920385ms waiting for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.033456  205051 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.037214  205051 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.037230  205051 pod_ready.go:81] duration metric: took 3.767612ms waiting for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.037238  205051 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.041124  205051 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.041144  205051 pod_ready.go:81] duration metric: took 3.899178ms waiting for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.041155  205051 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5fvs" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.225505  205051 pod_ready.go:92] pod "kube-proxy-s5fvs" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.225525  205051 pod_ready.go:81] duration metric: took 184.361713ms waiting for pod "kube-proxy-s5fvs" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.225536  205051 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.626248  205051 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:17:50.626266  205051 pod_ready.go:81] duration metric: took 400.721307ms waiting for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:17:50.626273  205051 pod_ready.go:38] duration metric: took 25.636600986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:17:50.626292  205051 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:17:50.626330  205051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:17:50.647297  205051 api_server.go:70] duration metric: took 25.736814648s to wait for apiserver process to appear ...
	I0816 22:17:50.647320  205051 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:17:50.647328  205051 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:17:50.651244  205051 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:17:50.651996  205051 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:17:50.652017  205051 api_server.go:129] duration metric: took 4.691835ms to wait for apiserver health ...
	I0816 22:17:50.652026  205051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:17:50.827797  205051 system_pods.go:59] 8 kube-system pods found
	I0816 22:17:50.827830  205051 system_pods.go:61] "coredns-78fcd69978-bwf24" [f54f877c-ec78-4c85-9791-63bbe3c69a29] Running
	I0816 22:17:50.827838  205051 system_pods.go:61] "etcd-no-preload-20210816221555-6487" [60ab6256-efe0-4b9e-a595-f21b8cb27292] Running
	I0816 22:17:50.827845  205051 system_pods.go:61] "kindnet-64mk6" [9457afdb-6c1b-4493-b14e-1e0ab02928a0] Running
	I0816 22:17:50.827854  205051 system_pods.go:61] "kube-apiserver-no-preload-20210816221555-6487" [130512c9-eb78-4e10-accf-2aed4029c8f3] Running
	I0816 22:17:50.827859  205051 system_pods.go:61] "kube-controller-manager-no-preload-20210816221555-6487" [72d71b26-e847-4ec0-8dd7-56227dd55ae6] Running
	I0816 22:17:50.827862  205051 system_pods.go:61] "kube-proxy-s5fvs" [c490d3c1-69f8-433d-82f1-f00418e05c0b] Running
	I0816 22:17:50.827866  205051 system_pods.go:61] "kube-scheduler-no-preload-20210816221555-6487" [54906644-5059-4dad-8490-c1c1b2cbcf3d] Running
	I0816 22:17:50.827870  205051 system_pods.go:61] "storage-provisioner" [9dad9434-bb24-4619-8a15-29884f5c18d9] Running
	I0816 22:17:50.827875  205051 system_pods.go:74] duration metric: took 175.843836ms to wait for pod list to return data ...
	I0816 22:17:50.827889  205051 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:17:51.025905  205051 default_sa.go:45] found service account: "default"
	I0816 22:17:51.025927  205051 default_sa.go:55] duration metric: took 198.033159ms for default service account to be created ...
	I0816 22:17:51.025934  205051 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:17:51.227532  205051 system_pods.go:86] 8 kube-system pods found
	I0816 22:17:51.227559  205051 system_pods.go:89] "coredns-78fcd69978-bwf24" [f54f877c-ec78-4c85-9791-63bbe3c69a29] Running
	I0816 22:17:51.227567  205051 system_pods.go:89] "etcd-no-preload-20210816221555-6487" [60ab6256-efe0-4b9e-a595-f21b8cb27292] Running
	I0816 22:17:51.227571  205051 system_pods.go:89] "kindnet-64mk6" [9457afdb-6c1b-4493-b14e-1e0ab02928a0] Running
	I0816 22:17:51.227575  205051 system_pods.go:89] "kube-apiserver-no-preload-20210816221555-6487" [130512c9-eb78-4e10-accf-2aed4029c8f3] Running
	I0816 22:17:51.227580  205051 system_pods.go:89] "kube-controller-manager-no-preload-20210816221555-6487" [72d71b26-e847-4ec0-8dd7-56227dd55ae6] Running
	I0816 22:17:51.227584  205051 system_pods.go:89] "kube-proxy-s5fvs" [c490d3c1-69f8-433d-82f1-f00418e05c0b] Running
	I0816 22:17:51.227587  205051 system_pods.go:89] "kube-scheduler-no-preload-20210816221555-6487" [54906644-5059-4dad-8490-c1c1b2cbcf3d] Running
	I0816 22:17:51.227591  205051 system_pods.go:89] "storage-provisioner" [9dad9434-bb24-4619-8a15-29884f5c18d9] Running
	I0816 22:17:51.227597  205051 system_pods.go:126] duration metric: took 201.658132ms to wait for k8s-apps to be running ...
	I0816 22:17:51.227617  205051 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:17:51.227656  205051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:17:51.237006  205051 system_svc.go:56] duration metric: took 9.381571ms WaitForService to wait for kubelet.
	I0816 22:17:51.237030  205051 kubeadm.go:547] duration metric: took 26.326552519s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:17:51.237058  205051 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:17:51.427089  205051 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:17:51.427114  205051 node_conditions.go:123] node cpu capacity is 8
	I0816 22:17:51.427127  205051 node_conditions.go:105] duration metric: took 190.064088ms to run NodePressure ...
	I0816 22:17:51.427140  205051 start.go:231] waiting for startup goroutines ...
	I0816 22:17:51.471143  205051 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:17:51.473437  205051 out.go:177] 
	W0816 22:17:51.473581  205051 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:17:51.475237  205051 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:17:51.476813  205051 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816221555-6487" cluster and "default" namespace by default
	I0816 22:17:49.848797  213866 out.go:177] * Preparing Kubernetes v1.14.0 on CRI-O 1.20.3 ...
	I0816 22:17:49.848874  213866 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210816221528-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:17:49.886531  213866 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0816 22:17:49.890036  213866 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
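	The /etc/hosts rewrite above is deliberately grep-then-copy rather than sed -i: any stale host.minikube.internal line is dropped, the fresh mapping is appended to a temp file, and a single sudo cp swaps it in, so a partially written hosts file is never observed. Unrolled for readability:
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo "192.168.58.1	host.minikube.internal"
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts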
	I0816 22:17:49.899171  213866 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0816 22:17:49.899234  213866 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:17:49.926348  213866 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:17:49.926368  213866 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:17:49.926409  213866 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:17:49.947443  213866 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:17:49.947464  213866 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:17:49.947526  213866 ssh_runner.go:149] Run: crio config
	I0816 22:17:50.012196  213866 cni.go:93] Creating CNI manager for ""
	I0816 22:17:50.012231  213866 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:17:50.012242  213866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:17:50.012258  213866 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210816221528-6487 NodeName:old-k8s-version-20210816221528-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:17:50.012405  213866 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-20210816221528-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210816221528-6487
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
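	This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new (scp'd a few lines below) and consumed by the phased kubeadm calls during the restart. As a hedged sketch, kubeadm v1.14 can also exercise it phase by phase from the node, e.g. preflight only:
	
	    sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml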
	
	I0816 22:17:50.012513  213866 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-20210816221528-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210816221528-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:17:50.012564  213866 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0816 22:17:50.018948  213866 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:17:50.019006  213866 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:17:50.025165  213866 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I0816 22:17:50.039026  213866 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:17:50.051176  213866 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2145 bytes)
	I0816 22:17:50.062641  213866 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:17:50.065361  213866 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:17:50.073542  213866 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487 for IP: 192.168.58.2
	I0816 22:17:50.073584  213866 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:17:50.073598  213866 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:17:50.073642  213866 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.key
	I0816 22:17:50.073659  213866 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/apiserver.key.cee25041
	I0816 22:17:50.073673  213866 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/proxy-client.key
	I0816 22:17:50.073758  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:17:50.073799  213866 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:17:50.073812  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:17:50.073835  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:17:50.073858  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:17:50.073885  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:17:50.073933  213866 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:17:50.074893  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:17:50.090215  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:17:50.105358  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:17:50.120518  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:17:50.135543  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:17:50.150708  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:17:50.166210  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:17:50.181287  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:17:50.196612  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:17:50.211580  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:17:50.227086  213866 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:17:50.242145  213866 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:17:50.253139  213866 ssh_runner.go:149] Run: openssl version
	I0816 22:17:50.257644  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:17:50.264527  213866 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:17:50.267588  213866 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:17:50.267633  213866 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:17:50.272099  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:17:50.278419  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:17:50.285148  213866 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:17:50.287880  213866 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:17:50.287935  213866 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:17:50.292231  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:17:50.298062  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:17:50.304531  213866 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:17:50.307273  213866 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:17:50.307307  213866 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:17:50.311557  213866 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
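	Each trusted cert ends up installed twice: the PEM itself under /usr/share/ca-certificates, plus a subject-hash symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how OpenSSL looks up trust anchors. Deriving a link name by hand:
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"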
	I0816 22:17:50.317560  213866 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210816221528-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210816221528-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:17:50.317652  213866 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:17:50.317693  213866 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:17:50.339759  213866 cri.go:76] found id: ""
	I0816 22:17:50.339810  213866 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:17:50.345841  213866 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:17:50.345857  213866 kubeadm.go:600] restartCluster start
	I0816 22:17:50.345898  213866 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:17:50.351681  213866 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:50.352532  213866 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210816221528-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:17:50.352851  213866 kubeconfig.go:128] "old-k8s-version-20210816221528-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:17:50.353440  213866 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:17:50.355596  213866 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:17:50.361479  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:50.361531  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:50.372943  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:50.573307  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:50.573368  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:50.586378  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:50.773650  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:50.773731  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:50.786871  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:50.973070  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:50.973169  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:50.986811  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:51.173010  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:51.173084  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:51.186702  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:51.373978  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:51.374048  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:51.388180  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:51.573419  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:51.573491  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:51.589046  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:51.773186  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:51.773252  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:51.786560  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:51.973818  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:51.973890  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:51.987091  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:52.173305  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:52.173374  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:52.186496  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:52.373723  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:52.373784  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:52.387054  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:52.573319  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:52.573388  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:52.586870  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:52.773125  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:52.773195  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:52.786615  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:52.973731  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:52.973815  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:52.987034  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:53.173051  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:53.173110  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:53.187794  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:53.374006  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:53.374064  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:53.387078  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:53.387096  213866 api_server.go:164] Checking apiserver status ...
	I0816 22:17:53.387134  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:17:53.398834  213866 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:17:53.398855  213866 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:17:53.398861  213866 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:17:53.398869  213866 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:17:53.398911  213866 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:17:53.433512  213866 cri.go:76] found id: ""
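	An empty result (found id: "") from the listing above means no kube-system containers survived to stop. The probe itself is plain crictl with a CRI label filter, runnable by hand against the same socket configured in /etc/crictl.yaml:
	
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system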
	I0816 22:17:53.433569  213866 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:17:53.442634  213866 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:17:53.449151  213866 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5743 Aug 16 22:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 Aug 16 22:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5927 Aug 16 22:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Aug 16 22:15 /etc/kubernetes/scheduler.conf
	
	I0816 22:17:53.449203  213866 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:17:53.456041  213866 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:17:53.462234  213866 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:17:53.468844  213866 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:17:53.475203  213866 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:17:53.481733  213866 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:17:53.481752  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:17:52.664180  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:53.910197  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:17:54.724435  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:17:54.823447  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:17:54.860790  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
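	Rather than a full kubeadm init, the restart replays individual phases against the same config: certs -> kubeconfig -> kubelet-start -> control-plane -> etcd, with the addon phase deferred until the API server is healthy (see below). Any one phase can be re-driven by hand the same way, e.g.:
	
	    sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH \
	      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml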
	I0816 22:17:54.917466  213866 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:17:54.917529  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:17:55.431977  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:17:55.931956  213866 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:17:56.017241  213866 api_server.go:70] duration metric: took 1.099774655s to wait for apiserver process to appear ...
	I0816 22:17:56.017280  213866 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:17:56.017292  213866 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0816 22:17:56.017669  213866 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0816 22:17:56.518392  213866 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0816 22:17:55.704452  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:17:58.757878  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:18:00.072771  213866 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:18:00.072815  213866 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:18:00.518454  213866 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0816 22:18:00.523313  213866 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0816 22:18:00.523339  213866 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0816 22:18:01.017799  213866 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0816 22:18:01.024331  213866 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0816 22:18:01.024363  213866 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0816 22:18:01.517789  213866 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0816 22:18:01.522331  213866 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0816 22:18:01.528263  213866 api_server.go:139] control plane version: v1.14.0
	I0816 22:18:01.528281  213866 api_server.go:129] duration metric: took 5.510995732s to wait for apiserver health ...
	I0816 22:18:01.528290  213866 cni.go:93] Creating CNI manager for ""
	I0816 22:18:01.528296  213866 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:18:01.530548  213866 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:18:01.530596  213866 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:18:01.534145  213866 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0816 22:18:01.534165  213866 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:18:01.546651  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:18:01.741143  213866 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:18:01.748495  213866 system_pods.go:59] 8 kube-system pods found
	I0816 22:18:01.748523  213866 system_pods.go:61] "coredns-fb8b8dccf-sw9nf" [9153dae9-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748528  213866 system_pods.go:61] "etcd-old-k8s-version-20210816221528-6487" [ad9fff4c-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748532  213866 system_pods.go:61] "kindnet-hcrgv" [914a6d55-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748536  213866 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816221528-6487" [a8db90aa-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748540  213866 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816221528-6487" [d10a41b7-fedf-11eb-bb14-0242c0a83a02] Running
	I0816 22:18:01.748543  213866 system_pods.go:61] "kube-proxy-k97nz" [914ae034-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748547  213866 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816221528-6487" [ad077829-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748573  213866 system_pods.go:61] "storage-provisioner" [9264ee62-fedf-11eb-bd03-0242afc9b4e0] Running
	I0816 22:18:01.748578  213866 system_pods.go:74] duration metric: took 7.416219ms to wait for pod list to return data ...
	I0816 22:18:01.748584  213866 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:18:01.752558  213866 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:18:01.752581  213866 node_conditions.go:123] node cpu capacity is 8
	I0816 22:18:01.752592  213866 node_conditions.go:105] duration metric: took 4.00183ms to run NodePressure ...
	I0816 22:18:01.752609  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:18:01.881917  213866 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:18:01.884739  213866 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0816 22:18:02.165458  213866 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0816 22:18:02.709081  213866 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0816 22:18:01.807677  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:18:04.856684  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
	I0816 22:18:03.722308  213866 retry.go:31] will retry after 791.196345ms: kubelet not initialised
	I0816 22:18:04.518909  213866 retry.go:31] will retry after 1.170244332s: kubelet not initialised
	I0816 22:18:05.692711  213866 retry.go:31] will retry after 2.253109428s: kubelet not initialised
	I0816 22:18:07.953596  213866 retry.go:31] will retry after 1.610739793s: kubelet not initialised
	I0816 22:18:07.898307  198075 cli_runner.go:115] Run: docker container inspect cert-options-20210816221525-6487 --format={{.State.Status}}
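
The "will retry after ..." lines above come from minikube's retry helper, which re-probes the kubelet with a growing delay until it initialises. Below is a minimal, self-contained sketch of that poll-with-backoff pattern; it is illustrative only (the 250ms seed and the 1.5x growth factor are assumptions for the example, not minikube's actual constants).

    // backoff_sketch.go - a minimal sketch of the poll-with-backoff pattern
    // suggested by the retry.go lines above; illustrative, not minikube's code.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor retries probe with a growing delay until it succeeds or the
    // deadline passes, mirroring the "will retry after ..." log lines.
    func waitFor(probe func() error, deadline time.Duration) error {
        delay := 250 * time.Millisecond // hypothetical starting delay
        start := time.Now()
        for {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out: %w", err)
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow ~1.5x, roughly matching the log cadence
        }
    }

    func main() {
        attempts := 0
        _ = waitFor(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("kubelet not initialised")
            }
            return nil
        }, 30*time.Second)
    }
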
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:18:10 UTC. --
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.997181209Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.998911132Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001615431Z" level=info msg="Conmon does support the --sync option"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001679089Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001686289Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.006618470Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.009192800Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.011666290Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023071034Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023093501Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.335777529Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-7wcqt Namespace:kube-system ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 NetNS:/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.336029066Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 22:15:34 pause-20210816221349-6487 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 16 22:15:37 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:37.869390202Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.008166152Z" level=info msg="Ran pod sandbox 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 with infra container: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.009539171Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010209695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010864154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.011418773Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.012207175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023341263Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/passwd: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023470306Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/group: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144183330Z" level=info msg="Created container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144745336Z" level=info msg="Starting container: 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.155275298Z" level=info msg="Started container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	2bd1364ac865c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago       Running             storage-provisioner       0                   4e41a3650a65f
	a3847cf5a7a0a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   2 minutes ago       Running             coredns                   0                   ace8d49de7551
	ba2e9dd72df01       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   3 minutes ago       Running             kindnet-cni               0                   d5d9684c84cea
	1b3d3880e345b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   3 minutes ago       Running             kube-proxy                0                   4a42049e95348
	1b4dd675dc4bc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   3 minutes ago       Running             etcd                      0                   c159e3fd639d7
	8a5626e3acb8d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   3 minutes ago       Running             kube-controller-manager   0                   5f516d619d78c
	e812d329ba697       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   3 minutes ago       Running             kube-apiserver            0                   a055e0d3dc6de
	a65e43c156f4f       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   3 minutes ago       Running             kube-scheduler            0                   97b975cd86e3b
	
	* 
	* ==> coredns [a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +4.255720] net_ratelimit: 1 callbacks suppressed
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.003943] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000057] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +8.187375] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000001] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +1.048379] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethb975c587
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ae e9 bc 5d c0 ce 08 06        .........]....
	[  +0.000537] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6258e918
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 52 65 b5 f7 2a 08 06        .......Re..*..
	[  +4.928013] cgroup: cgroup2: unknown option "nsdelegate"
	[ +20.434413] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth77a7f862
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff a6 7f 02 c1 0b 7c 08 06        ...........|..
	[  +0.312009] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethf2148e09
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 62 56 15 39 44 18 08 06        ......bV.9D...
	[  +0.299903] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth00213cf6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 76 97 cb ee 26 08 06        .......v...&..
	[  +2.187341] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe] <==
	* 2021-08-16 22:14:21.479302 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" " with result "range_response_count:0 size:4" took too long (4.338058844s) to execute
	2021-08-16 22:14:21.479372 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (5.261433612s) to execute
	2021-08-16 22:14:21.479441 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (5.23712455s) to execute
	2021-08-16 22:14:21.479474 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (5.175788463s) to execute
	2021-08-16 22:14:21.479613 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210816221349-6487\" " with result "range_response_count:1 size:4444" took too long (5.261587425s) to execute
	2021-08-16 22:14:21.479683 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/system-node-critical\" " with result "range_response_count:0 size:4" took too long (4.338074095s) to execute
	2021-08-16 22:14:21.479744 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (5.263168271s) to execute
	2021-08-16 22:14:23.363332 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000022778s) to execute
	2021-08-16 22:14:23.693334 W | wal: sync duration of 2.214752234s, expected less than 1s
	2021-08-16 22:14:23.700535 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (2.770548489s) to execute
	2021-08-16 22:14:23.700621 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/csr-76sbr\" " with result "range_response_count:1 size:916" took too long (4.511679407s) to execute
	2021-08-16 22:14:23.707488 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:0 size:4" took too long (2.220681713s) to execute
	2021-08-16 22:14:23.707931 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816221349-6487\" " with result "range_response_count:0 size:4" took too long (2.217648018s) to execute
	2021-08-16 22:14:23.708160 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (2.166841468s) to execute
	2021-08-16 22:14:23.708359 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:0 size:4" took too long (2.218216556s) to execute
	2021-08-16 22:14:23.708455 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:4" took too long (2.217851178s) to execute
	2021-08-16 22:14:24.811296 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:5" took too long (246.052034ms) to execute
	2021-08-16 22:14:24.811319 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (374.900233ms) to execute
	2021-08-16 22:14:24.811421 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (278.56366ms) to execute
	2021-08-16 22:14:42.629727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:14:52.246014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:02.245960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:12.246500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:22.245983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:15:32.246694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
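
The "took too long" warnings above, together with "wal: sync duration of 2.21s, expected less than 1s", point to disk and CPU contention on the CI host rather than an etcd bug. One way to measure the same read latency from Go is the official etcd v3 client, sketched below; the endpoint and key prefix are taken from the logs, and TLS configuration is omitted for brevity, so treat this as a sketch rather than a drop-in probe.

    // etcd_latency_sketch.go - a minimal sketch of timing a read against etcd,
    // mirroring the "took too long" measurements above. TLS setup is omitted,
    // so against a real minikube etcd this would need client certificates.
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://127.0.0.1:2379"}, // endpoint from the logs
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        start := time.Now()
        _, err = cli.Get(ctx, "/registry/health")
        fmt.Printf("range request took %v (err=%v)\n", time.Since(start), err)
    }
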
	
	* 
	* ==> kernel <==
	*  22:19:11 up 58 min,  0 users,  load average: 2.00, 3.21, 2.28
	Linux pause-20210816221349-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef] <==
	* W0816 22:19:03.767482       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:03.840908       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:03.987602       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.506001       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.515145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.744701       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.774266       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.909086       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:04.924888       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:05.362911       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:05.403191       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:05.493800       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:06.761091       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:06.864030       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:07.982814       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:08.201589       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:08.467561       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:19:11.062656       1 trace.go:205] Trace[1587025069]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:18:11.062) (total time: 60000ms):
	Trace[1587025069]: [1m0.000244746s] [1m0.000244746s] END
	E0816 22:19:11.062684       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:19:11.062746       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:19:11.063886       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:19:11.065055       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:19:11.066262       1 trace.go:205] Trace[1309979634]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:18:11.062) (total time: 60003ms):
	Trace[1309979634]: [1m0.003869829s] [1m0.003869829s] END
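
The apiserver spends this entire window in a gRPC reconnect loop against 127.0.0.1:2379, which is why the node List at the end blows its 60s budget. Before blaming gRPC, it is worth checking whether the TCP port answers at all; a dial probe like the hypothetical one below distinguishes "nothing listening" from "listening but stalled above TCP".

    // etcd_reach_sketch.go - a tiny connectivity probe for the etcd client
    // port the apiserver keeps failing to reach above; purely illustrative.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 3*time.Second)
        if err != nil {
            fmt.Println("etcd port unreachable:", err) // matches the i/o timeout above
            return
        }
        conn.Close()
        fmt.Println("etcd port reachable; the stall is above TCP (TLS/gRPC or a slow server)")
    }
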
	
	* 
	* ==> kube-controller-manager [8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1] <==
	* ec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil),
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001cecae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ca1928), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003f20e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.H
ostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001cb4d60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ca1970)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:14:39.232974       1 shared_informer.go:247] Caches are synced for cronjob 
	I0816 22:14:39.245131       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.267189       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:14:39.270380       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:14:39.281695       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:14:39.281712       1 disruption.go:371] Sending events to api server.
	I0816 22:14:39.304499       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:14:39.550791       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0816 22:14:39.562350       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 22:14:39.641950       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:39.651611       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7wcqt"
	I0816 22:14:39.724203       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.724328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:14:39.731288       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.736318       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:44.033494       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:16:18.834007       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: rpc error: code = Unavailable desc = transport is closing
	E0816 22:17:18.835292       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:17:18.835317       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:17:23.835603       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0816 22:17:57.848763       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: Timeout: request did not complete within requested timeout context deadline exceeded
	E0816 22:18:57.849960       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:18:57.849980       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:19:02.850266       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
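
The long struct dump that opens this section ends in a routine optimistic-concurrency conflict: the kindnet DaemonSet was modified between the controller's read and its write. The standard client-go remedy is to re-read the object and retry on conflict; a hedged sketch of that pattern follows (the package name and annotation key are hypothetical, and clientset construction is omitted).

    // conflict_retry_sketch.go - the "object has been modified" error above is
    // a normal optimistic-concurrency conflict; client-go's usual remedy is to
    // re-read and retry, sketched here.
    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func bumpKindnet(cs *kubernetes.Clientset) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-fetch the latest version on every attempt.
            ds, err := cs.AppsV1().DaemonSets("kube-system").Get(
                context.TODO(), "kindnet", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if ds.Annotations == nil {
                ds.Annotations = map[string]string{}
            }
            ds.Annotations["example/touched"] = "true" // hypothetical change
            _, err = cs.AppsV1().DaemonSets("kube-system").Update(
                context.TODO(), ds, metav1.UpdateOptions{})
            return err // a Conflict error triggers another attempt
        })
    }
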
	
	* 
	* ==> kube-proxy [1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04] <==
	* I0816 22:14:40.215578       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:14:40.215638       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:14:40.215676       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:14:40.247009       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:14:40.247045       1 server_others.go:212] Using iptables Proxier.
	I0816 22:14:40.247058       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:14:40.247072       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:14:40.247479       1 server.go:643] Version: v1.21.3
	I0816 22:14:40.248182       1 config.go:315] Starting service config controller
	I0816 22:14:40.248255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:14:40.248210       1 config.go:224] Starting endpoint slice config controller
	I0816 22:14:40.248339       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:14:40.250530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:14:40.251756       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:14:40.348781       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:14:40.348804       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7] <==
	* E0816 22:14:17.590366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:17.693992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.713305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.720174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:19.024758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:19.034977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:19.119270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:14:19.472008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:19.474924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:19.492908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.552344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.701087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.846725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:14:20.081603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:14:20.288654       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:14:20.300668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:14:20.446407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:20.757057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:23.039661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:23.059708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:23.337106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:23.637126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:23.867448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:24.219179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0816 22:14:26.331536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
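
The "forbidden" errors above are the scheduler listing resources before its RBAC bindings exist during apiserver startup; they stop once caches sync (last line). To verify the same permission programmatically, a SelfSubjectAccessReview is one option; the sketch below assumes an already-built clientset and is not how the scheduler itself checks access.

    // rbac_probe_sketch.go - probing the permission the scheduler was denied
    // above; illustrative only, clientset construction omitted.
    package sketch

    import (
        "context"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func canListNodes(cs *kubernetes.Clientset) (bool, error) {
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "nodes",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
            context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            return false, err
        }
        return resp.Status.Allowed, nil
    }
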
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:19:11 UTC. --
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628289    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628372    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628398    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:14:50 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:50.628484    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(594ee08b00034d6446168ace5475edf70bf1e13147a0dcb8b757b040060edcf5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:14:51 pause-20210816221349-6487 kubelet[1598]: E0816 22:14:51.871688    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:01 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:01.922738    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:11 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:11.972524    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894667    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894750    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894785    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-7wcqt"
	Aug 16 22:15:14 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:14.894873    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-7wcqt_kube-system(46c03ad2-6959-421b-83b5-f2f596fc6ec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0(be341e6f8e3fce99f8e3daffabf09cb4c686dccbf7bd4c1da6274a100d9bb1d5): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-7wcqt" podUID=46c03ad2-6959-421b-83b5-f2f596fc6ec6
	Aug 16 22:15:22 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:22.025359    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:32 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:32.078267    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925897    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:33 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:33.925902    1598 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735160    1598 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735229    1598 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:34 pause-20210816221349-6487 kubelet[1598]: E0816 22:15:34.735255    1598 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 16 22:15:36 pause-20210816221349-6487 kubelet[1598]: W0816 22:15:36.372291    1598 container.go:586] Failed to update stats for container "/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d": /sys/fs/cgroup/cpuset/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/docker/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/cpuset.cpus found to be empty, continuing to push stats
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.567369    1598 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668799    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dq6\" (UniqueName: \"kubernetes.io/projected/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-kube-api-access-p9dq6\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:37 pause-20210816221349-6487 kubelet[1598]: I0816 22:15:37.668851    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b-tmp\") pod \"storage-provisioner\" (UID: \"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\") "
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:15:39 pause-20210816221349-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
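
Two distinct failures are interleaved above: CNI sandbox creation rejected with `could not add IP address to "cni0": permission denied`, and gRPC dials failing while CRI-O restarts and /var/run/crio/crio.sock briefly disappears. The latter is easy to reproduce with a direct socket probe, sketched below.

    // crio_socket_sketch.go - the kubelet errors above are dials against
    // /var/run/crio/crio.sock while CRI-O restarts; a direct probe of the
    // socket reproduces the same "no such file or directory" failure mode.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
        if err != nil {
            fmt.Println("CRI socket unavailable:", err)
            return
        }
        conn.Close()
        fmt.Println("CRI socket is back; gRPC reconnects should succeed")
    }
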
	
	* 
	* ==> storage-provisioner [2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee] <==
	* I0816 22:15:38.165511       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:15:38.173044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:15:38.173092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:15:38.180640       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:15:38.180766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
	I0816 22:15:38.180706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ed80fce-ba59-4042-adeb-a8987870e830", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15 became leader
	I0816 22:15:38.280916       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816221349-6487_6779ff04-c08a-4002-a9c5-68bc190dcd15!
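
The provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader lease (an Endpoints-based lock in this log). For reference, the same handshake with client-go's current Lease lock looks roughly like the sketch below; the lock name and namespace are copied from the log, everything else is assumed, and clientset construction is omitted.

    // leader_sketch.go - a condensed sketch of the leader-election handshake
    // the provisioner logs above; uses a Lease lock for brevity where the
    // provisioner actually uses an Endpoints lock.
    package sketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func runElected(ctx context.Context, cs *kubernetes.Clientset, id string, work func(context.Context)) {
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: work,       // "successfully acquired lease"
                OnStoppedLeading: func() {},  // lost the lease
            },
        })
    }
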
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:19:11.066134  217366 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (95.54s)
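
VerifyStatus fails here because `minikube logs` shells out to `kubectl describe nodes`, which blocks for a minute against an apiserver that cannot reach etcd (see the reconnect loop in the kube-apiserver section above). The generic pattern of bounding such a call with a context deadline looks like the sketch below; the 30s budget is an arbitrary example, not the harness's actual timeout.

    // cmd_timeout_sketch.go - bounding a slow kubectl call with a context
    // deadline, mirroring the failure mode above; illustrative only.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        out, err := exec.CommandContext(ctx, "kubectl", "describe", "nodes").CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            fmt.Println("describe nodes timed out; suspect the apiserver/etcd path")
            return
        }
        fmt.Printf("err=%v\n%s", err, out)
    }
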

                                                
                                    
TestPause/serial/PauseAgain (19.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5: exit status 80 (5.591319686s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210816221349-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:19:12.195893  225939 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:19:12.196020  225939 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:12.196026  225939 out.go:311] Setting ErrFile to fd 2...
	I0816 22:19:12.196030  225939 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:12.196189  225939 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:19:12.196388  225939 out.go:305] Setting JSON to false
	I0816 22:19:12.196412  225939 mustload.go:65] Loading cluster: pause-20210816221349-6487
	I0816 22:19:12.196725  225939 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:19:12.197121  225939 cli_runner.go:115] Run: docker container inspect pause-20210816221349-6487 --format={{.State.Status}}
	I0816 22:19:12.241489  225939 host.go:66] Checking if "pause-20210816221349-6487" exists ...
	I0816 22:19:12.242153  225939 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210816221349-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:19:12.244667  225939 out.go:177] * Pausing node pause-20210816221349-6487 ... 
	I0816 22:19:12.244698  225939 host.go:66] Checking if "pause-20210816221349-6487" exists ...
	I0816 22:19:12.244958  225939 ssh_runner.go:149] Run: systemctl --version
	I0816 22:19:12.244999  225939 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210816221349-6487
	I0816 22:19:12.287482  225939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32901 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816221349-6487/id_rsa Username:docker}
	I0816 22:19:12.383965  225939 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:19:12.392766  225939 pause.go:50] kubelet running: true
	I0816 22:19:12.392835  225939 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:19:17.564712  225939 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.171850477s)
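	The four lines above are the kubelet half of the pause: probe whether the unit is active, then `disable --now` it (stop it and remove it from the boot sequence in one step). `systemctl is-active --quiet` reports state purely through its exit code, with 0 meaning active. A rough local equivalent, assuming a direct shell in place of minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning mirrors the is-active probe: exit status 0 means active.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning())
		if kubeletRunning() {
			// Stop and disable the unit in one step, as in the log line above.
			if err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run(); err != nil {
				fmt.Println("disable failed:", err)
			}
		}
	}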
	I0816 22:19:17.564761  225939 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:19:17.564842  225939 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:19:17.644348  225939 cri.go:76] found id: "2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee"
	I0816 22:19:17.644371  225939 cri.go:76] found id: "a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb"
	I0816 22:19:17.644379  225939 cri.go:76] found id: "ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870"
	I0816 22:19:17.644385  225939 cri.go:76] found id: "1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04"
	I0816 22:19:17.644391  225939 cri.go:76] found id: "1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe"
	I0816 22:19:17.644397  225939 cri.go:76] found id: "8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1"
	I0816 22:19:17.644400  225939 cri.go:76] found id: "e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef"
	I0816 22:19:17.644404  225939 cri.go:76] found id: "a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7"
	I0816 22:19:17.644408  225939 cri.go:76] found id: ""
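	The `found id:` entries above come from one `crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>` query per requested namespace, with the combined stdout split on newlines; the final empty id corresponds to the trailing newline of that output. A sketch of the same collection step, assuming local exec in place of the SSH transport (unlike the log above, it filters out the empty entry instead of printing it):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listIDs runs one crictl query per namespace and gathers non-empty
	// container IDs from the newline-separated --quiet output.
	func listIDs(namespaces []string) ([]string, error) {
		var ids []string
		for _, ns := range namespaces {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).Output()
			if err != nil {
				return nil, err
			}
			for _, id := range strings.Split(string(out), "\n") {
				if id = strings.TrimSpace(id); id != "" {
					ids = append(ids, id)
				}
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listIDs([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"})
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}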
	I0816 22:19:17.644439  225939 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:19:17.687032  225939 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","pid":2096,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04/userdata","rootfs":"/var/lib/containers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","created":"2021-08-16T22:14:39.86033792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"206453cd","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"206453cd\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationM
essagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.71358247Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-njz9n\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/con
tainers/storage/overlay/5e3efe8268ff35cf0dd8a0eea598244b53cd42e17db8c35176abaf307881152b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/
a67f4cc2-55b9-43ee-a73c-16467b872fa0/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/containers/kube-proxy/a9e20a41\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a67f4cc2-55b9-43ee-a73c-16467b872fa0/volumes/kubernetes.io~projected/kube-api-access-xt74x\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.T
imeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","pid":1330,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe/userdata","rootfs":"/var/lib/containers/storage/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","created":"2021-08-16T22:14:11.184120369Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5f3481","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5f3481\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\
":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.919160303Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage
/overlay/4c457105260fdbf1df736e6c94ff5078723ae934a8b810f9d4b328bbf0117cf1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/80302e95ebcb53cf62a48fa24997db61/conta
iners/etcd/35927707\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/config.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-cont
ainers/2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee/userdata","rootfs":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","created":"2021-08-16T22:15:38.128091963Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c24abe1f","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c24abe1f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee","io.kubernetes.cri-o.ContainerType":"contain
er","io.kubernetes.cri-o.Created":"2021-08-16T22:15:38.02315922Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-sys
tem_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/containers/storage-provisioner/8e800500\",\"readonly\":false},{\"container_path\":\"/v
ar/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/volumes/kubernetes.io~projected/kube-api-access-p9dq6\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"v
olumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","pid":2035,"status":"running","bundle":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata","rootfs":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","created":"2021-08-16T22:14:39.564330015Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.139237266Z\"}","io.kubernetes.cri-o
.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.465199334Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-njz9n","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\"
:\"kube-proxy-njz9n\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-njz9n_a67f4cc2-55b9-43ee-a73c-16467b872fa0/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-njz9n\",\"uid\":\"a67f4cc2-55b9-43ee-a73c-16467b872fa0\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/01d055712545b80db5c901619ce4bf8bdd945febc35163b63ee7634064e51f98/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-njz9n_kube-system_a67f4cc2-55b9-43ee-a73c-16467b872fa0_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.Sandbo
xID":"4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8/userdata/shm","io.kubernetes.pod.name":"kube-proxy-njz9n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a67f4cc2-55b9-43ee-a73c-16467b872fa0","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:14:39.139237266Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","pid":3627,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata","rootfs":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274a
8a/merged","created":"2021-08-16T22:15:37.972189986Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volu
mes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:15:37.567005729Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:37.882238552Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"integration-test\":\"storage-p
rovisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5a1462570c96e48a6d73c2acbeda0ea2aa03a6860638955e3cf463eaa274a8a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.
PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"1d0ef4e0-4e8b-4e81-ab4f-f1993f646f7b","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"com
mand\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:15:37.567005729Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","pid":1171,"status":"running","bundle":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata","rootfs":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","created":"2021-08-16T22:14:08.696695162Z","annotations":{"component":"kube-controller-mana
ger","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022772584Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.544105397Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kub
e-controller-manager-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210816221349-6487\",\"uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cdc20954a09c5065fbf922db548413d8f6b6fa0ef8d7eac1f6e00d3cbe14840/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4e
eaf87e646ed_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.02
2772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","pid":1328,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1/userdata","rootfs":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","created":"2021-08-16T22:14:11.184099082Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernete
s.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.90734537Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"11aa4bce4217eb6f1cd4eeaf87e646ed\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manage
r-pause-20210816221349-6487_11aa4bce4217eb6f1cd4eeaf87e646ed/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dcd39b2b9af5fcbb9e04eeae93a4ec590de2fd1d9d936aedf4cd5157d2e55b9f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210816221349-6487_kube-system_11aa4bce4217eb6f1cd4eeaf87e646ed_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kuber
netes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/containers/kube-controller-manager/d86fa3c7\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/11aa4bce4217eb6f1cd4eeaf87e646ed/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":t
rue},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.hash":"11aa4bce4217eb6f1cd4eeaf87e646ed","kubernetes.io/config.seen":"2021-08-16T22:14:07.022772584Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","pid":1170,"status":"running","bundle":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata","rootfs":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0697
bf78b5c340f42badf917e697c2a79be83df4/merged","created":"2021-08-16T22:14:08.70867872Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022773706Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.534025998Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb
64dff1d98c1/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210816221349-6487\",\"uid\":\"c4caeeea162ae780eb6bff45a3346bb9\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aec486b185af1e74ec192dad0697bf78b5c340f42badf917e697c2a79be83df4/merged","io.kubernetes.cri-o.Name":"k8s_kube-schedu
ler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346bb
9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","pid":1183,"status":"running","bundle":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata","rootfs":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","created":"2021-08-16T22:14:08.712573864Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"06505179e0af316cfe7c1c0c3697c38d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022771014
Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.538885494Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"component\":\"kube-apiserver\",\
"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210816221349-6487\",\"uid\":\"06505179e0af316cfe7c1c0c3697c38d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a48846504bf19b36168fef627796d07b926ea4b09c6916617bfaf91ef98e28ac/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a2881
85b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","pid":2757,"status":"
running","bundle":"/run/containers/storage/overlay-containers/a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb/userdata","rootfs":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","created":"2021-08-16T22:15:29.976145781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51fdf088","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51fdf088\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP
\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.849205018Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes
.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fc6b28f49f8ff64723ac65fe0fb7ef7e99e0ba3a02327ce4e06febd83fc2027/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.SeccompProfilePa
th":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/containers/coredns/1be8c54d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/46c03ad2-6959-421b-83b5-f2f596fc6ec6/volumes/kubernetes.io~projected/kube-api-access-f2tgp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod
.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","pid":1308,"status":"running","bundle":"/run/containers/storage/overlay-containers/a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7/userdata","rootfs":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","created":"2021-08-16T22:14:11.184083062Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Ann
otations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.895038631Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4caeeea162ae780eb6bff45
a3346bb9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210816221349-6487_c4caeeea162ae780eb6bff45a3346bb9/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8777fc30b98adde9a9b64714b92d54367d88cae0a83147e69c916cdadda2d34/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210816221349-6487_kube-system_c4caeeea162ae780eb6bff45a3346bb9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.k
ubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4caeeea162ae780eb6bff45a3346bb9/containers/kube-scheduler/8abcee8c\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.hash":"c4caeeea162ae780eb6bff45a3346bb9","kubernetes.io/config.seen":"2021-08-16T22:14:07.022773706Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"
1.0.2-dev","id":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","pid":2726,"status":"running","bundle":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata","rootfs":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","created":"2021-08-16T22:15:29.78822534Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.654892580Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth4a11b195\",\"mac\":\"ba:a0:a0:57:03:38\"},{\"name\":\"eth0\",\"mac\":\"f6:40:49:70:a6:2e\",\"sandbox\":\"/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dn
s\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:15:29.642943749Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-7wcqt","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-7wcqt\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kube
rnetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-7wcqt_46c03ad2-6959-421b-83b5-f2f596fc6ec6/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-7wcqt\",\"uid\":\"46c03ad2-6959-421b-83b5-f2f596fc6ec6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6e104fa91127cd895f78f5e754f5764f48aa7d4066943f77ae5f2ad85ab27fa0/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-7wcqt_kube-system_46c03ad2-6959-421b-83b5-f2f596fc6ec6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.ku
bernetes.cri-o.SandboxID":"ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-7wcqt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"46c03ad2-6959-421b-83b5-f2f596fc6ec6","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-16T22:14:39.654892580Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","pid":2120,"status":"running","bundle":"/run/containers/storage/overlay-containers/ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870/userdata","rootfs":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac5
2232145f6978e44ac056450178082df4/merged","created":"2021-08-16T22:14:39.964738228Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"39f7c29e","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"39f7c29e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.737372629Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.ku
bernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d14ead-1ca3-48aa-aafd-4199981ea73a/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ba43f604db681f912becca341198bac52232145f6978e44ac056450178082df4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.ku
bernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/containers/kindnet-cni/88a22482\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernet
es.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e4d14ead-1ca3-48aa-aafd-4199981ea73a/volumes/kubernetes.io~projected/kube-api-access-cfvz7\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","pid":1191,"status":"running","bundle":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata","rootfs":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","created":"2021-08-16T22:14:08.737155739Z","annot
ations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:14:07.022751207Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"80302e95ebcb53cf62a48fa24997db61\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:08.546986208Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/hostname"
,"io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210816221349-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-pause-20210816221349-6487\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210816221349-6487_80302e95ebcb53cf62a48fa24997db61/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210816221349-6487\",\"uid\":\"80302e95ebcb53cf62a48fa24997db61\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/30b63de48fe4ab34e75dc2f244ebcb7ebcf1a95e023f36297b0dd21e309a7f35/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210816221349-6487_kube-system_80302e95ebcb53cf62a48fa24997db61_0","io.kubernet
es.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"80302e95ebcb53cf62a48fa24997db61","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"80302e95ebcb53cf62a48fa24997db61","kubernetes.io/con
fig.seen":"2021-08-16T22:14:07.022751207Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","pid":2028,"status":"running","bundle":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata","rootfs":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","created":"2021-08-16T22:14:39.612071417Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:14:39.135166345Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kub
ernetes.cri-o.ContainerName":"k8s_POD_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:14:39.462425985Z","io.kubernetes.cri-o.HostName":"pause-20210816221349-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-gqxwk","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-gqxwk\",\"k8s-app\":\"kindnet\",\"app\":\"kindnet\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-gqxwk_e4d14
ead-1ca3-48aa-aafd-4199981ea73a/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-gqxwk\",\"uid\":\"e4d14ead-1ca3-48aa-aafd-4199981ea73a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b12afb63786aea20fd73b7ea7901fc04cd5bda0180622c3c4a04f24c3c4a0c62/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-gqxwk_kube-system_e4d14ead-1ca3-48aa-aafd-4199981ea73a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b","io.kubernetes.cri-o.SeccompProfilePa
th":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b/userdata/shm","io.kubernetes.pod.name":"kindnet-gqxwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e4d14ead-1ca3-48aa-aafd-4199981ea73a","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:14:39.135166345Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","pid":1341,"status":"running","bundle":"/run/containers/storage/overlay-containers/e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef/userdata","rootfs":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","created":"2021-08-16T22:14:11.184099004Z","annotations":{"io.container.manager":"cri-o","
io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:14:10.927039032Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e
0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210816221349-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06505179e0af316cfe7c1c0c3697c38d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210816221349-6487_06505179e0af316cfe7c1c0c3697c38d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d024a967569bd77936fb53bcfe77def7cd373ca66f8ed4e541d45c068932eae/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a055e0d3dc6de9710363847b318bc668
1a78c24731d7a288185b775afb8a0c65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210816221349-6487_kube-system_06505179e0af316cfe7c1c0c3697c38d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/containers/kube-apiserver/c36896cf\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06505179e0af316cfe7c1c0c3697c38d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_p
ath\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210816221349-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06505179e0af316cfe7c1c0c3697c38d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"06505179e0af316cfe7c1c0c3697c38d","kubernetes.io/config.seen":"2021-08-16T22:14:07.022771014Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0816 22:19:17.688065  225939 cri.go:113] list returned 16 containers
	I0816 22:19:17.688087  225939 cri.go:116] container: {ID:1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 Status:running}
	I0816 22:19:17.688106  225939 cri.go:116] container: {ID:1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe Status:running}
	I0816 22:19:17.688113  225939 cri.go:116] container: {ID:2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee Status:stopped}
	I0816 22:19:17.688120  225939 cri.go:122] skipping {2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee stopped}: state = "stopped", want "running"
	I0816 22:19:17.688133  225939 cri.go:116] container: {ID:4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 Status:running}
	I0816 22:19:17.688139  225939 cri.go:118] skipping 4a42049e953482ab171e6fdcdc10a7c871688cf837213fbfa5ce605fe2f850e8 - not in ps
	I0816 22:19:17.688145  225939 cri.go:116] container: {ID:4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 Status:running}
	I0816 22:19:17.688156  225939 cri.go:118] skipping 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 - not in ps
	I0816 22:19:17.688162  225939 cri.go:116] container: {ID:5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df Status:running}
	I0816 22:19:17.688171  225939 cri.go:118] skipping 5f516d619d78cd802dda4c887a25bf07772f7b56a346f5f2c38093a234fb25df - not in ps
	I0816 22:19:17.688176  225939 cri.go:116] container: {ID:8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1 Status:running}
	I0816 22:19:17.688185  225939 cri.go:116] container: {ID:97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 Status:running}
	I0816 22:19:17.688193  225939 cri.go:118] skipping 97b975cd86e3b81d25d6a6fcb1665c5483a9bf1ff0e1c936708eb64dff1d98c1 - not in ps
	I0816 22:19:17.688201  225939 cri.go:116] container: {ID:a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 Status:running}
	I0816 22:19:17.688208  225939 cri.go:118] skipping a055e0d3dc6de9710363847b318bc6681a78c24731d7a288185b775afb8a0c65 - not in ps
	I0816 22:19:17.688213  225939 cri.go:116] container: {ID:a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb Status:running}
	I0816 22:19:17.688218  225939 cri.go:116] container: {ID:a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7 Status:running}
	I0816 22:19:17.688225  225939 cri.go:116] container: {ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 Status:running}
	I0816 22:19:17.688236  225939 cri.go:118] skipping ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 - not in ps
	I0816 22:19:17.688241  225939 cri.go:116] container: {ID:ba2e9dd72df0118186b262b3a681bbc49788610178d61aaf44d6d85a7e156870 Status:running}
	I0816 22:19:17.688248  225939 cri.go:116] container: {ID:c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 Status:running}
	I0816 22:19:17.688255  225939 cri.go:118] skipping c159e3fd639d7b13a3065ac1388c83de26e0a18c5cb46dc30b2d8ff860ba6292 - not in ps
	I0816 22:19:17.688267  225939 cri.go:116] container: {ID:d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b Status:running}
	I0816 22:19:17.688275  225939 cri.go:118] skipping d5d9684c84cea176d5556f22d5f9f0077520b06488ffe799042586c8ce1bac0b - not in ps
	I0816 22:19:17.688281  225939 cri.go:116] container: {ID:e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef Status:running}
	I0816 22:19:17.688324  225939 ssh_runner.go:149] Run: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04
	I0816 22:19:17.704577  225939 ssh_runner.go:149] Run: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe
	I0816 22:19:17.721625  225939 out.go:177] 
	W0816 22:19:17.721762  225939 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04 1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:19:17Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:19:17.721776  225939 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:19:17.725599  225939 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:19:17.727130  225939 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210816221349-6487 --alsologtostderr -v=5" : exit status 80
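The root cause is visible in the stderr above: minikube listed 16 containers, filtered them down to the running ones that were also "in ps", and then batched the two surviving IDs into a single `sudo runc pause` invocation. runc's pause subcommand accepts exactly one container ID per call, so the batched command fails with "requires exactly 1 argument(s)". A minimal sketch of the one-invocation-per-container approach, using a hypothetical pauseContainers helper and a stub runner (illustrative only, not minikube's actual implementation):

	package main

	import "fmt"

	// pauseContainers issues one `sudo runc pause <id>` per container ID.
	// runc's pause subcommand takes exactly one argument, so batching IDs
	// fails with "requires exactly 1 argument(s)", as the log above shows.
	func pauseContainers(run func(cmd string) error, ids []string) error {
		for _, id := range ids {
			cmd := fmt.Sprintf("sudo runc pause %s", id)
			if err := run(cmd); err != nil {
				return fmt.Errorf("%s: %v", cmd, err)
			}
		}
		return nil
	}

	func main() {
		// Stub runner that prints each command instead of executing it.
		echo := func(cmd string) error { fmt.Println(cmd); return nil }
		// The two IDs minikube attempted to pause in the log above.
		ids := []string{
			"1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04",
			"1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe",
		}
		if err := pauseContainers(echo, ids); err != nil {
			fmt.Println("error:", err)
		}
	}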
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210816221349-6487
helpers_test.go:236: (dbg) docker inspect pause-20210816221349-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d",
	        "Created": "2021-08-16T22:13:50.947309762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:51.51454931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hosts",
	        "LogPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d-json.log",
	        "Name": "/pause-20210816221349-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210816221349-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210816221349-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210816221349-6487",
	                "Source": "/var/lib/docker/volumes/pause-20210816221349-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210816221349-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "name.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0570bf3c5e1623f8d98964c6c2afad0bc376f97b81690d2719c8fc8bafd98f8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0570bf3c5e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210816221349-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d383b66a4"
	                    ],
	                    "NetworkID": "394b0b68014ce308c4cac60aecb16a91b93630211f90dc3e79f9040bcf6f53a0",
	                    "EndpointID": "66674d2a7391164faa47236ee3755487b5135a367100c27f1e2bc07dde97d027",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
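Note that the inspect output above reports the node container itself as "Running" with "Paused": false, so the failure occurred inside the guest at the runc level rather than at the Docker layer. The same two state fields can be spot-checked with a standard Go-template filter (standard docker CLI usage, shown for reference):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-20210816221349-6487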
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (11.211962848s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:24 UTC | Mon, 16 Aug 2021 22:13:46 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	|         | --memory=2200                                     |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                        |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                        |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:46 UTC | Mon, 16 Aug 2021 22:13:49 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	| start   | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:44 UTC | Mon, 16 Aug 2021 22:14:50 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:50 UTC | Mon, 16 Aug 2021 22:14:53 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:53 UTC | Mon, 16 Aug 2021 22:15:24 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210816221221-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:22 UTC | Mon, 16 Aug 2021 22:15:25 UTC |
	|         | stopped-upgrade-20210816221221-6487               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:24 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816221527-6487                    | kubenet-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	| delete  | -p flannel-20210816221527-6487                    | flannel-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| delete  | -p false-20210816221528-6487                      | false-20210816221528-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:49 UTC | Mon, 16 Aug 2021 22:15:32 UTC |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --install-addons=false                            |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:32 UTC | Mon, 16 Aug 2021 22:15:38 UTC |
	|         | --alsologtostderr                                 |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210816221326-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:52 UTC | Mon, 16 Aug 2021 22:15:55 UTC |
	|         | running-upgrade-20210816221326-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:17:12 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:22 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:55 UTC | Mon, 16 Aug 2021 22:17:51 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:17:59 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:25 UTC | Mon, 16 Aug 2021 22:19:09 UTC |
	|         | cert-options-20210816221525-6487                  |                                        |         |         |                               |                               |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                        |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                        |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                        |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                        |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| -p      | cert-options-20210816221525-6487                  | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:09 UTC | Mon, 16 Aug 2021 22:19:10 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                        |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                        |         |         |                               |                               |
	| unpause | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:11 UTC | Mon, 16 Aug 2021 22:19:12 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:10 UTC | Mon, 16 Aug 2021 22:19:13 UTC |
	|         | cert-options-20210816221525-6487                  |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:19:13
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:19:13.301704  226196 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:19:13.301796  226196 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:13.301806  226196 out.go:311] Setting ErrFile to fd 2...
	I0816 22:19:13.301810  226196 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:13.301930  226196 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:19:13.302239  226196 out.go:305] Setting JSON to false
	I0816 22:19:13.339656  226196 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3520,"bootTime":1629148833,"procs":338,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:19:13.339756  226196 start.go:121] virtualization: kvm guest
	I0816 22:19:13.342422  226196 out.go:177] * [embed-certs-20210816221913-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:19:13.343968  226196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:19:13.342577  226196 notify.go:169] Checking for updates...
	I0816 22:19:13.345536  226196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:19:13.346956  226196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:19:13.348437  226196 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:19:13.348908  226196 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:19:13.349030  226196 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:19:13.349129  226196 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:19:13.349176  226196 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:19:13.398668  226196 docker.go:132] docker version: linux-19.03.15
	I0816 22:19:13.398777  226196 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:19:13.479313  226196 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:19:13.43527811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:19:13.479459  226196 docker.go:244] overlay module found
	I0816 22:19:13.481616  226196 out.go:177] * Using the docker driver based on user configuration
	I0816 22:19:13.481642  226196 start.go:278] selected driver: docker
	I0816 22:19:13.481648  226196 start.go:751] validating driver "docker" against <nil>
	I0816 22:19:13.481666  226196 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:19:13.481725  226196 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:19:13.481746  226196 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:19:13.485128  226196 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:19:13.485959  226196 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:19:13.569882  226196 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:19:13.522702537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:19:13.569990  226196 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 22:19:13.570128  226196 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:19:13.570148  226196 cni.go:93] Creating CNI manager for ""
	I0816 22:19:13.570160  226196 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:19:13.570168  226196 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 22:19:13.570178  226196 start_flags.go:277] config:
	{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
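
The cni.go lines above record the CNI decision rule: with the docker driver and a non-Docker runtime such as CRI-O, minikube recommends kindnet and sets NetworkPlugin=cni. A minimal Go sketch of that rule, for illustration only; chooseCNI and its inputs are hypothetical names, not minikube's actual cni package API:

package main

import "fmt"

// chooseCNI is a hypothetical reconstruction of the decision logged by
// cni.go:160: KIC drivers (docker/podman) paired with a non-Docker
// container runtime get kindnet, because that runtime has no default
// bridge network of its own inside the KIC node.
func chooseCNI(driver, runtime string) string {
	isKIC := driver == "docker" || driver == "podman"
	if isKIC && runtime != "docker" {
		return "kindnet"
	}
	return "" // no CNI manager selected; the runtime's default networking applies
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // prints: kindnet
}
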
	I0816 22:19:13.572227  226196 out.go:177] * Starting control plane node embed-certs-20210816221913-6487 in cluster embed-certs-20210816221913-6487
	I0816 22:19:13.572272  226196 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:19:09.559512  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:12.058112  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.573748  226196 out.go:177] * Pulling base image ...
	I0816 22:19:13.573775  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:13.573805  226196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:19:13.573824  226196 cache.go:56] Caching tarball of preloaded images
	I0816 22:19:13.573877  226196 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:19:13.574001  226196 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:19:13.574019  226196 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:19:13.574131  226196 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:19:13.574156  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json: {Name:mk837e8194ab2b2a82bccf8a9a9a0d624adb0134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:13.668793  226196 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:19:13.668825  226196 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:19:13.668841  226196 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:19:13.668873  226196 start.go:313] acquiring machines lock for embed-certs-20210816221913-6487: {Name:mkaa6840e29b8ce519208ca05a6868b89ed678ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:19:13.668997  226196 start.go:317] acquired machines lock for "embed-certs-20210816221913-6487" in 106.298µs
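
start.go:313-317 above serializes machine creation behind a named lock (Delay:500ms Timeout:10m0s), acquired here in about 106µs because nothing contends. A sketch of the same acquire-with-retry pattern using a plain flock(2) file lock; this illustrates the technique only and is not the lock implementation minikube actually uses:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquire takes an exclusive lock on path, retrying every delay until
// timeout, mirroring the {Delay:500ms Timeout:10m0s} parameters logged above.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("acquiring lock %s: timed out after %s", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}
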
	I0816 22:19:13.669022  226196 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:19:13.669091  226196 start.go:126] createHost starting for "" (driver="docker")
	I0816 22:19:11.060582  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.560175  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:15.562233  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.671499  226196 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 22:19:13.671729  226196 start.go:160] libmachine.API.Create for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:19:13.671758  226196 client.go:168] LocalClient.Create starting
	I0816 22:19:13.671842  226196 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:19:13.671872  226196 main.go:130] libmachine: Decoding PEM data...
	I0816 22:19:13.671889  226196 main.go:130] libmachine: Parsing certificate...
	I0816 22:19:13.672065  226196 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:19:13.672099  226196 main.go:130] libmachine: Decoding PEM data...
	I0816 22:19:13.672117  226196 main.go:130] libmachine: Parsing certificate...
	I0816 22:19:13.672510  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 22:19:13.711967  226196 cli_runner.go:162] docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 22:19:13.712036  226196 network_create.go:255] running [docker network inspect embed-certs-20210816221913-6487] to gather additional debugging logs...
	I0816 22:19:13.712052  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487
	W0816 22:19:13.749576  226196 cli_runner.go:162] docker network inspect embed-certs-20210816221913-6487 returned with exit code 1
	I0816 22:19:13.749612  226196 network_create.go:258] error running [docker network inspect embed-certs-20210816221913-6487]: docker network inspect embed-certs-20210816221913-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210816221913-6487
	I0816 22:19:13.749623  226196 network_create.go:260] output of [docker network inspect embed-certs-20210816221913-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210816221913-6487
	
	** /stderr **
	I0816 22:19:13.749661  226196 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:19:13.788235  226196 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-394b0b68014c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:c7:80:dd}}
	I0816 22:19:13.788882  226196 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-4ed2783b447d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d5:d5:90:49}}
	I0816 22:19:13.790033  226196 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-98b3ee991257 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b1:3b:84:51}}
	I0816 22:19:13.791520  226196 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000638098] misses:0}
	I0816 22:19:13.791590  226196 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:19:13.791605  226196 network_create.go:106] attempt to create docker network embed-certs-20210816221913-6487 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0816 22:19:13.791647  226196 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210816221913-6487
	I0816 22:19:13.861245  226196 network_create.go:90] docker network embed-certs-20210816221913-6487 192.168.76.0/24 created
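
The network.go lines above scan candidate private /24 subnets (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), skipping any whose gateway is already bound to a local bridge, and reserve the first free one; the static node IP is then gateway+1 (192.168.76.2). A self-contained sketch of that scan; the starting subnet and the step of 9 in the third octet are read off the log, and the helper name taken is hypothetical:

package main

import (
	"fmt"
	"net"
)

// taken reports whether the subnet's gateway (x.y.z.1) is already assigned
// to a local interface, e.g. the br-* bridge of an existing docker network.
func taken(gw net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(gw) {
			return true
		}
	}
	return false
}

func main() {
	// Walk 192.168.49.0/24, 192.168.58.0/24, ... as the log above shows.
	for octet := 49; octet <= 255; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if taken(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s, node IP 192.168.%d.2)\n", octet, gw, octet)
		return
	}
}
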
	I0816 22:19:13.861276  226196 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20210816221913-6487" container
	I0816 22:19:13.861325  226196 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 22:19:13.900782  226196 cli_runner.go:115] Run: docker volume create embed-certs-20210816221913-6487 --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 22:19:13.940689  226196 oci.go:102] Successfully created a docker volume embed-certs-20210816221913-6487
	I0816 22:19:13.940770  226196 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210816221913-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --entrypoint /usr/bin/test -v embed-certs-20210816221913-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 22:19:14.711637  226196 oci.go:106] Successfully prepared a docker volume embed-certs-20210816221913-6487
	W0816 22:19:14.711693  226196 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 22:19:14.711702  226196 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 22:19:14.711707  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:14.711735  226196 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 22:19:14.711762  226196 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 22:19:14.711822  226196 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210816221913-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 22:19:14.795019  226196 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210816221913-6487 --name embed-certs-20210816221913-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --network embed-certs-20210816221913-6487 --ip 192.168.76.2 --volume embed-certs-20210816221913-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:19:15.324335  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Running}}
	I0816 22:19:15.374298  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:15.420245  226196 cli_runner.go:115] Run: docker exec embed-certs-20210816221913-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 22:19:15.562793  226196 oci.go:278] the created container "embed-certs-20210816221913-6487" has a running status.
	I0816 22:19:15.562828  226196 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa...
	I0816 22:19:15.688176  226196 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 22:19:16.059432  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:16.100847  226196 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 22:19:16.100866  226196 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210816221913-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 22:19:14.058243  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:16.058503  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:18.557254  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:18.372387  226196 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210816221913-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.660521738s)
	I0816 22:19:18.372418  226196 kic.go:188] duration metric: took 3.660680 seconds to extract preloaded images to volume
	I0816 22:19:18.372480  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:18.411770  226196 machine.go:88] provisioning docker machine ...
	I0816 22:19:18.411806  226196 ubuntu.go:169] provisioning hostname "embed-certs-20210816221913-6487"
	I0816 22:19:18.411860  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.450157  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:18.450335  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:18.450351  226196 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816221913-6487 && echo "embed-certs-20210816221913-6487" | sudo tee /etc/hostname
	I0816 22:19:18.603404  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816221913-6487
	
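
Provisioning runs over SSH even though the "machine" is a container: 22/tcp is published to an ephemeral loopback port (32944 here), which the provisioner looks up with the docker inspect template shown above before dialing 127.0.0.1. A small sketch of that lookup; shelling out to the docker CLI stands in for minikube's cli_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the loopback port docker published for the container's
// 22/tcp, using the same Go template that appears in the log above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-20210816221913-6487")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
}
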
	I0816 22:19:18.603477  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.643743  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:18.643885  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:18.643922  226196 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816221913-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816221913-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816221913-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:19:18.767211  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:19:18.767256  226196 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:19:18.767319  226196 ubuntu.go:177] setting up certificates
	I0816 22:19:18.767332  226196 provision.go:83] configureAuth start
	I0816 22:19:18.767396  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:18.807035  226196 provision.go:138] copyHostCerts
	I0816 22:19:18.807086  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:19:18.807096  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:19:18.807144  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:19:18.807216  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:19:18.807226  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:19:18.807241  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:19:18.807289  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:19:18.807297  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:19:18.807312  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:19:18.807352  226196 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816221913-6487 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210816221913-6487]
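
provision.go:112 above issues a server certificate signed by the minikube CA whose SANs cover every address the daemon may be reached by (node IP, loopback, localhost, hostnames). A compact crypto/x509 sketch of issuing such a SAN certificate; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, so it only illustrates the shape of the operation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem/ca-key.pem from .minikube/certs.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	ca, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate carrying the SAN list from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20210816221913-6487"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-20210816221913-6487"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued server cert with SANs:", cert.DNSNames, cert.IPAddresses)
}
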
	I0816 22:19:18.895657  226196 provision.go:172] copyRemoteCerts
	I0816 22:19:18.895709  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:19:18.895743  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.936723  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.026683  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:19:19.042459  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:19:19.058500  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:19:19.073859  226196 provision.go:86] duration metric: configureAuth took 306.513578ms
	I0816 22:19:19.073881  226196 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:19:19.074033  226196 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:19:19.074189  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.114560  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:19.114725  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:19.114746  226196 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:19:19.468077  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:19:19.468107  226196 machine.go:91] provisioned docker machine in 1.056316804s
	I0816 22:19:19.468118  226196 client.go:171] LocalClient.Create took 5.796352067s
	I0816 22:19:19.468135  226196 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210816221913-6487" took 5.796406297s
	I0816 22:19:19.468146  226196 start.go:267] post-start starting for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:19:19.468156  226196 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:19:19.468228  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:19:19.468285  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.509141  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.599058  226196 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:19:19.601697  226196 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:19:19.601720  226196 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:19:19.601729  226196 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:19:19.601736  226196 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:19:19.601747  226196 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:19:19.601796  226196 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:19:19.601922  226196 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:19:19.602043  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:19:19.608252  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:19:19.623893  226196 start.go:270] post-start completed in 155.736025ms
	I0816 22:19:19.624252  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:19.664042  226196 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:19:19.664257  226196 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:19:19.664299  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.704852  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.791957  226196 start.go:129] duration metric: createHost completed in 6.122855076s
	I0816 22:19:19.791983  226196 start.go:80] releasing machines lock for "embed-certs-20210816221913-6487", held for 6.122975943s
	I0816 22:19:19.792053  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:19.831056  226196 ssh_runner.go:149] Run: systemctl --version
	I0816 22:19:19.831098  226196 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:19:19.831117  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.831149  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.878381  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.884046  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.967872  226196 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:19:20.006416  226196 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:19:20.015382  226196 docker.go:153] disabling docker service ...
	I0816 22:19:20.015427  226196 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:19:20.024573  226196 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:19:20.033758  226196 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:19:20.100113  226196 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:19:20.165813  226196 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:19:20.174379  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:19:20.186280  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:19:20.193811  226196 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:19:20.193843  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:19:20.202278  226196 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:19:20.207967  226196 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:19:20.208014  226196 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:19:20.214588  226196 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
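
The three commands above form a fallback chain for bridged traffic: when the net.bridge.bridge-nf-call-iptables sysctl is absent (exit status 255), crio.go loads br_netfilter and enables IPv4 forwarding directly through /proc. A sketch of the same chain, assuming it runs as root inside the node:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the sysctl first; failure means br_netfilter is not loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed (tolerated, as in the log):", err)
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("ip_forward enabled")
}
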
	I0816 22:19:20.220334  226196 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:19:20.280776  226196 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:19:20.289616  226196 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:19:20.289667  226196 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:19:20.292630  226196 start.go:413] Will wait 60s for crictl version
	I0816 22:19:20.292670  226196 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:19:20.319206  226196 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:19:20.319272  226196 ssh_runner.go:149] Run: crio --version
	I0816 22:19:20.378981  226196 ssh_runner.go:149] Run: crio --version
	I0816 22:19:18.060608  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:20.061387  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:20.442981  226196 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:19:20.443052  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:19:20.481608  226196 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0816 22:19:20.484825  226196 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:19:20.493429  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:20.493496  226196 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:19:20.538638  226196 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:19:20.538659  226196 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:19:20.538699  226196 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:19:20.562178  226196 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:19:20.562197  226196 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:19:20.562255  226196 ssh_runner.go:149] Run: crio config
	I0816 22:19:20.626481  226196 cni.go:93] Creating CNI manager for ""
	I0816 22:19:20.626501  226196 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:19:20.626511  226196 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:19:20.626525  226196 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816221913-6487 NodeName:embed-certs-20210816221913-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:19:20.626672  226196 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "embed-certs-20210816221913-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
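
kubeadm.go:153/157 renders the YAML above from an options struct. A reduced text/template sketch of that rendering step, covering only a handful of the fields shown; the Params struct and template text are illustrative, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// Params holds a few of the kubeadm options logged above.
type Params struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	p := Params{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		NodeName:         "embed-certs-20210816221913-6487",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
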
	I0816 22:19:20.626779  226196 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=embed-certs-20210816221913-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:19:20.626833  226196 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:19:20.633579  226196 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:19:20.633631  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:19:20.640034  226196 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:19:20.651613  226196 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:19:20.663703  226196 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0816 22:19:20.675324  226196 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:19:20.678037  226196 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:19:20.686319  226196 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487 for IP: 192.168.76.2
	I0816 22:19:20.686361  226196 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:19:20.686382  226196 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:19:20.686460  226196 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key
	I0816 22:19:20.686472  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt with IP's: []
	I0816 22:19:20.852920  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt ...
	I0816 22:19:20.852950  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt: {Name:mkd9c198d4c7f9c8c784093a4ebe740a0ac82674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:20.853172  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key ...
	I0816 22:19:20.853197  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key: {Name:mk89c99383a818365234fa5c6fc15aee0ea06aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:20.853314  226196 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25
	I0816 22:19:20.853325  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 22:19:21.046284  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 ...
	I0816 22:19:21.046315  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25: {Name:mka4656174b58e1d143b6f452fb49cb942928021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.046486  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25 ...
	I0816 22:19:21.046498  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25: {Name:mk8533245acff5c8fbef6dedc3df9d3eef9bb6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.046569  226196 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt
	I0816 22:19:21.046624  226196 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key
	I0816 22:19:21.046673  226196 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key
	I0816 22:19:21.046681  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt with IP's: []
	I0816 22:19:21.283110  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt ...
	I0816 22:19:21.283143  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt: {Name:mk02cebfd2674b34a37859dd1d216f5938633a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.283316  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key ...
	I0816 22:19:21.283332  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key: {Name:mkfe96cd1b7b66e5ace97031a88d9646a295e963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.283486  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:19:21.283521  226196 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:19:21.283532  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:19:21.283557  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:19:21.283590  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:19:21.283615  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:19:21.283657  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:19:21.284589  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:19:21.313244  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:19:21.332732  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:19:21.351082  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:19:21.368388  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:19:21.384511  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:19:21.400058  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:19:21.414989  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:19:21.430650  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:19:21.446235  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:19:21.461586  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:19:21.476849  226196 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:19:21.487759  226196 ssh_runner.go:149] Run: openssl version
	I0816 22:19:21.492008  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:19:21.498387  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.501108  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.501154  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.505926  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:19:21.512772  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:19:21.519185  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.521884  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.521918  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.526199  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 22:19:21.532541  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:19:21.538972  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.541746  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.541777  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.545930  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
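The test/ln steps above implement OpenSSL's subject-hash lookup convention: each CA placed under /etc/ssl/certs must also be reachable as "<subject-hash>.0", which is exactly the name minikube computes with "openssl x509 -hash". A minimal sketch of the same step, assuming the minikubeCA.pem path from the log:

	# derive the hash OpenSSL expects, then link the CA under that name
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"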
	I0816 22:19:21.552406  226196 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:19:21.552484  226196 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:19:21.552533  226196 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:19:21.576024  226196 cri.go:76] found id: ""
	I0816 22:19:21.576083  226196 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:19:21.582400  226196 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:19:21.588531  226196 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:19:21.588572  226196 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:19:21.594576  226196 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:19:21.594618  226196 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:19:21.876995  226196 out.go:204]   - Generating certificates and keys ...
	I0816 22:19:20.561242  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:23.058076  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:22.080108  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:24.560472  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:24.328246  226196 out.go:204]   - Booting up control plane ...
	I0816 22:19:25.556694  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:27.557188  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
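The kubeadm init invocation above disables a long list of preflight checks that are known to fail under the docker driver (Swap, SystemVerification, the static-manifest FileAvailable checks, and so on). A sketch of replaying just the preflight phase by hand, using the same config path and binary dir as the log; the shortened ignore list here is illustrative, not the full set:

	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,Mem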
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:19:29 UTC. --
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.997181209Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.998911132Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001615431Z" level=info msg="Conmon does support the --sync option"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001679089Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001686289Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.006618470Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.009192800Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.011666290Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023071034Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023093501Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.335777529Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-7wcqt Namespace:kube-system ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 NetNS:/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.336029066Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 22:15:34 pause-20210816221349-6487 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 16 22:15:37 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:37.869390202Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.008166152Z" level=info msg="Ran pod sandbox 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 with infra container: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.009539171Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010209695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010864154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.011418773Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.012207175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023341263Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/passwd: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023470306Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/group: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144183330Z" level=info msg="Created container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144745336Z" level=info msg="Starting container: 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.155275298Z" level=info msg="Started container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
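The container IDs in the CRI-O log can be cross-checked against the runtime directly; a sketch using the storage-provisioner ID from the "Started container" line above (crictl accepts ID prefixes):

	sudo crictl ps -a
	sudo crictl inspect 2bd1364ac865c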
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	2bd1364ac865c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago       Exited              storage-provisioner       0                   4e41a3650a65f
	a3847cf5a7a0a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   3 minutes ago       Running             coredns                   0                   ace8d49de7551
	ba2e9dd72df01       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   4 minutes ago       Running             kindnet-cni               0                   d5d9684c84cea
	1b3d3880e345b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   4 minutes ago       Running             kube-proxy                0                   4a42049e95348
	1b4dd675dc4bc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   5 minutes ago       Running             etcd                      0                   c159e3fd639d7
	8a5626e3acb8d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   5 minutes ago       Running             kube-controller-manager   0                   5f516d619d78c
	e812d329ba697       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   5 minutes ago       Running             kube-apiserver            0                   a055e0d3dc6de
	a65e43c156f4f       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   5 minutes ago       Running             kube-scheduler            0                   97b975cd86e3b
	
	* 
	* ==> coredns [a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816221349-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816221349-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816221349-6487
	                    minikube.k8s.io/updated_at=2021_08_16T22_14_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:14:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816221349-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:15:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210816221349-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                43a26ced-e56b-4198-bb41-8e54e8862df5
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-7wcqt                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m50s
	  kube-system                 etcd-pause-20210816221349-6487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m58s
	  kube-system                 kindnet-gqxwk                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-pause-20210816221349-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-pause-20210816221349-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-njz9n                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-pause-20210816221349-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  5m22s (x5 over 5m22s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x4 over 5m22s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x4 over 5m22s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s                  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s                  kubelet     Node pause-20210816221349-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s                  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m50s                  kubelet     Node pause-20210816221349-6487 status is now: NodeReady
	  Normal  Starting                 4m49s                  kube-proxy  Starting kube-proxy.
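All four node conditions flipped to Unknown at 22:19:02 because the kubelet stopped posting status, which is consistent with the kubelet being stopped during TestPause/serial/PauseAgain. One way to watch just the conditions while reproducing, assuming the same kubectl context as the test:

	kubectl --context pause-20210816221349-6487 get node pause-20210816221349-6487 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'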
	
	* 
	* ==> dmesg <==
	* [  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.003943] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000057] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +8.187375] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000001] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +1.048379] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethb975c587
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ae e9 bc 5d c0 ce 08 06        .........]....
	[  +0.000537] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6258e918
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 52 65 b5 f7 2a 08 06        .......Re..*..
	[  +4.928013] cgroup: cgroup2: unknown option "nsdelegate"
	[ +20.434413] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth77a7f862
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff a6 7f 02 c1 0b 7c 08 06        ...........|..
	[  +0.312009] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethf2148e09
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 62 56 15 39 44 18 08 06        ......bV.9D...
	[  +0.299903] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth00213cf6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 76 97 cb ee 26 08 06        .......v...&..
	[  +2.187341] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:19] cgroup: cgroup2: unknown option "nsdelegate"
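The "martian source" lines are the kernel flagging packets whose source address looks wrong for the interface they arrived on; with nested container networks on the docker bridge this is common noise rather than a failure here. Whether the host logs them at all is governed by a sysctl:

	sysctl net.ipv4.conf.all.log_martians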
	
	* 
	* ==> etcd [1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe] <==
	* 2021-08-16 22:19:11.927827 I | embed: rejected connection from "127.0.0.1:34508" (error "write tcp 127.0.0.1:2379->127.0.0.1:34508: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.928582 I | embed: rejected connection from "127.0.0.1:34474" (error "write tcp 127.0.0.1:2379->127.0.0.1:34474: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929510 I | embed: rejected connection from "127.0.0.1:35136" (error "write tcp 127.0.0.1:2379->127.0.0.1:35136: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929565 I | embed: rejected connection from "127.0.0.1:35132" (error "write tcp 127.0.0.1:2379->127.0.0.1:35132: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929787 I | embed: rejected connection from "127.0.0.1:34450" (error "write tcp 127.0.0.1:2379->127.0.0.1:34450: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929877 I | embed: rejected connection from "127.0.0.1:34448" (error "write tcp 127.0.0.1:2379->127.0.0.1:34448: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.931002 I | embed: rejected connection from "127.0.0.1:34486" (error "write tcp 127.0.0.1:2379->127.0.0.1:34486: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.931153 I | embed: rejected connection from "127.0.0.1:34950" (error "write tcp 127.0.0.1:2379->127.0.0.1:34950: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.932464 I | embed: rejected connection from "127.0.0.1:34860" (error "write tcp 127.0.0.1:2379->127.0.0.1:34860: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.932755 I | embed: rejected connection from "127.0.0.1:35138" (error "write tcp 127.0.0.1:2379->127.0.0.1:35138: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933381 I | embed: rejected connection from "127.0.0.1:35046" (error "write tcp 127.0.0.1:2379->127.0.0.1:35046: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933624 I | embed: rejected connection from "127.0.0.1:34968" (error "write tcp 127.0.0.1:2379->127.0.0.1:34968: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933778 I | embed: rejected connection from "127.0.0.1:35004" (error "write tcp 127.0.0.1:2379->127.0.0.1:35004: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.934215 I | embed: rejected connection from "127.0.0.1:34954" (error "write tcp 127.0.0.1:2379->127.0.0.1:34954: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935526 I | embed: rejected connection from "127.0.0.1:34922" (error "write tcp 127.0.0.1:2379->127.0.0.1:34922: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935695 I | embed: rejected connection from "127.0.0.1:34936" (error "write tcp 127.0.0.1:2379->127.0.0.1:34936: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935730 I | embed: rejected connection from "127.0.0.1:34878" (error "write tcp 127.0.0.1:2379->127.0.0.1:34878: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936378 I | embed: rejected connection from "127.0.0.1:34848" (error "write tcp 127.0.0.1:2379->127.0.0.1:34848: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936902 I | embed: rejected connection from "127.0.0.1:35146" (error "write tcp 127.0.0.1:2379->127.0.0.1:35146: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936977 I | embed: rejected connection from "127.0.0.1:34498" (error "write tcp 127.0.0.1:2379->127.0.0.1:34498: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.937382 I | embed: rejected connection from "127.0.0.1:34952" (error "write tcp 127.0.0.1:2379->127.0.0.1:34952: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.938652 I | embed: rejected connection from "127.0.0.1:35126" (error "write tcp 127.0.0.1:2379->127.0.0.1:35126: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.938770 I | embed: rejected connection from "127.0.0.1:35120" (error "write tcp 127.0.0.1:2379->127.0.0.1:35120: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.939069 I | embed: rejected connection from "127.0.0.1:34920" (error "write tcp 127.0.0.1:2379->127.0.0.1:34920: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.939248 I | embed: rejected connection from "127.0.0.1:34502" (error "write tcp 127.0.0.1:2379->127.0.0.1:34502: write: broken pipe", ServerName "")
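The burst of "rejected connection ... broken pipe" entries is etcd logging clients (chiefly the apiserver on 127.0.0.1) whose connections died mid-write around the pause. A hypothetical direct health probe, assuming kubeadm's usual cert layout under minikube's cert dir /var/lib/minikube/certs/etcd:

	sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health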
	
	* 
	* ==> kernel <==
	*  22:19:29 up 58 min,  0 users,  load average: 2.28, 3.20, 2.29
	Linux pause-20210816221349-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef] <==
	* I0816 22:19:21.842041       1 trace.go:205] Trace[289958795]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:21.845) (total time: 59996ms):
	Trace[289958795]: [59.996320262s] [59.996320262s] END
	I0816 22:19:22.488231       1 trace.go:205] Trace[29487337]: "GuaranteedUpdate etcd3" type:*core.Node (16-Aug-2021 22:19:02.869) (total time: 19618ms):
	Trace[29487337]: ---"Transaction committed" 19617ms (22:19:00.488)
	Trace[29487337]: [19.618419774s] [19.618419774s] END
	I0816 22:19:22.488617       1 trace.go:205] Trace[1252951637]: "Update" url:/api/v1/nodes/pause-20210816221349-6487/status,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:19:02.869) (total time: 19619ms):
	Trace[1252951637]: ---"Object stored in database" 19618ms (22:19:00.488)
	Trace[1252951637]: [19.619034129s] [19.619034129s] END
	I0816 22:19:23.981629       1 trace.go:205] Trace[1675584492]: "List etcd3" key:/resourcequotas/kube-public,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:18:33.781) (total time: 50199ms):
	Trace[1675584492]: [50.199633862s] [50.199633862s] END
	I0816 22:19:23.981636       1 trace.go:205] Trace[1665405246]: "List etcd3" key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:18:37.566) (total time: 46414ms):
	Trace[1665405246]: [46.414964917s] [46.414964917s] END
	I0816 22:19:23.981758       1 trace.go:205] Trace[1931229347]: "List" url:/api/v1/namespaces/kube-public/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:33.781) (total time: 50199ms):
	Trace[1931229347]: ---"Listing from storage done" 50199ms (22:19:00.981)
	Trace[1931229347]: [50.199781473s] [50.199781473s] END
	I0816 22:19:23.981865       1 trace.go:205] Trace[2017992765]: "List" url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:37.566) (total time: 46415ms):
	Trace[2017992765]: ---"Listing from storage done" 46415ms (22:19:00.981)
	Trace[2017992765]: [46.415211105s] [46.415211105s] END
	W0816 22:19:24.018072       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:24.208146       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:25.096285       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:25.943668       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:26.656310       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:26.918536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:27.697434       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
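The 46-60s List traces and the repeated "dial tcp 127.0.0.1:2379: i/o timeout" reconnects point at the same root cause: the apiserver could not reach etcd while the runtime was paused. Once the cluster is unpaused, a quick confirmation is the apiserver's own etcd health endpoint:

	kubectl --context pause-20210816221349-6487 get --raw='/healthz/etcd'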
	
	* 
	* ==> kube-controller-manager [8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1] <==
	* I0816 22:14:39.641950       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:39.651611       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7wcqt"
	I0816 22:14:39.724203       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.724328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:14:39.731288       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.736318       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:44.033494       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:16:18.834007       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: rpc error: code = Unavailable desc = transport is closing
	E0816 22:17:18.835292       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:17:18.835317       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:17:23.835603       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0816 22:17:57.848763       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: Timeout: request did not complete within requested timeout context deadline exceeded
	E0816 22:18:57.849960       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:18:57.849980       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:19:02.850266       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	I0816 22:19:22.495672       1 event.go:291] "Event occurred" object="pause-20210816221349-6487" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210816221349-6487 status is now: NodeNotReady"
	I0816 22:19:22.507118       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.511253       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.515555       1 event.go:291] "Event occurred" object="kube-system/kindnet-gqxwk" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.518905       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-njz9n" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.524007       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-7wcqt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.527327       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.536072       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.538805       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0816 22:19:22.538849       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04] <==
	* I0816 22:14:40.215578       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:14:40.215638       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:14:40.215676       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:14:40.247009       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:14:40.247045       1 server_others.go:212] Using iptables Proxier.
	I0816 22:14:40.247058       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:14:40.247072       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:14:40.247479       1 server.go:643] Version: v1.21.3
	I0816 22:14:40.248182       1 config.go:315] Starting service config controller
	I0816 22:14:40.248255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:14:40.248210       1 config.go:224] Starting endpoint slice config controller
	I0816 22:14:40.248339       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:14:40.250530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:14:40.251756       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:14:40.348781       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:14:40.348804       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7] <==
	* E0816 22:14:17.590366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:17.693992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.713305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.720174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:19.024758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:19.034977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:19.119270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:14:19.472008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:19.474924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:19.492908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.552344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.701087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.846725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:14:20.081603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:14:20.288654       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:14:20.300668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:14:20.446407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:20.757057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:23.039661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:23.059708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:23.337106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:23.637126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:23.867448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:24.219179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0816 22:14:26.331536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
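The scheduler's "forbidden" reflector errors are the usual startup race: its informers begin listing before the apiserver has finished bootstrapping RBAC, and they stop once the client-ca informer syncs (22:14:26 above). Had they persisted, the default binding to check would be:

	kubectl --context pause-20210816221349-6487 get clusterrolebinding system:kube-scheduler -o wide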
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:19:29 UTC. --
	Aug 16 22:19:12 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:12.507382    4924 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229097    4924 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229136    4924 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229148    4924 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229156    4924 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229315    4924 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229356    4924 remote_runtime.go:62] parsed scheme: ""
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229364    4924 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229398    4924 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229407    4924 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229471    4924 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229483    4924 remote_image.go:50] parsed scheme: ""
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229488    4924 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229496    4924 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229502    4924 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229578    4924 kubelet.go:404] "Attempting to sync node with API server"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229596    4924 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229624    4924 kubelet.go:283] "Adding apiserver pod source"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229640    4924 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.237613    4924 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: E0816 22:19:17.557870    4924 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.558460    4924 server.go:1190] "Started kubelet"
	Aug 16 22:19:17 pause-20210816221349-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:19:17 pause-20210816221349-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee] <==
	* rs/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 63 [sync.Cond.Wait, 3 minutes]:
	sync.runtime_notifyListWait(0xc000046850, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000046840)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f2420, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00014af00, 0x18e5530, 0xc000047cc0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00024a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00024a0e0, 0x18b3d60, 0xc000290000, 0x1, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00024a0e0, 0x3b9aca00, 0x0, 0x1, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00024a0e0, 0x3b9aca00, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
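The storage-provisioner output reads as a Go goroutine dump: goroutines parked in workqueue.(*Type).Get are idle workers waiting for items rather than stuck ones. The same kind of dump can be requested from any running Go process with SIGQUIT; a sketch (the pgrep pattern is illustrative, and note that SIGQUIT also terminates the process after dumping):

	sudo kill -QUIT "$(pgrep -f storage-provisioner | head -n1)"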
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210816221349-6487 -n pause-20210816221349-6487
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (316.819666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210816221349-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210816221349-6487 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210816221349-6487 describe pod : exit status 1 (56.851597ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210816221349-6487 describe pod : exit status 1
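The describe failure above is a direct consequence of the empty jsonpath result: with no non-running pods, the helper appends no pod names to "kubectl describe pod", and kubectl rejects an empty resource name. A hedged sketch of the guard that sidesteps the error; the function name and flow are illustrative, not the suite's actual helpers:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunning mirrors the two kubectl invocations above: list
	// non-running pods, then describe them only when the list is non-empty.
	func describeNonRunning(ctx string) error {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return err
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			// Skipping avoids "error: resource name may not be empty".
			fmt.Println("no non-running pods")
			return nil
		}
		args := append([]string{"--context", ctx, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).Run()
	}

	func main() {
		_ = describeNonRunning("pause-20210816221349-6487")
	}
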
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210816221349-6487
helpers_test.go:236: (dbg) docker inspect pause-20210816221349-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d",
	        "Created": "2021-08-16T22:13:50.947309762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:13:51.51454931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/hosts",
	        "LogPath": "/var/lib/docker/containers/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d/859d383b66a464345f72a46b674a13660cb523e705cf9906dfc89aa7ac33ed5d-json.log",
	        "Name": "/pause-20210816221349-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210816221349-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210816221349-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/699c14ab7b6f1c0dacf602df13a2fa2eb5acd9541bc64b4ed2e359bf1166606e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210816221349-6487",
	                "Source": "/var/lib/docker/volumes/pause-20210816221349-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210816221349-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "name.minikube.sigs.k8s.io": "pause-20210816221349-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0570bf3c5e1623f8d98964c6c2afad0bc376f97b81690d2719c8fc8bafd98f8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0570bf3c5e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210816221349-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d383b66a4"
	                    ],
	                    "NetworkID": "394b0b68014ce308c4cac60aecb16a91b93630211f90dc3e79f9040bcf6f53a0",
	                    "EndpointID": "66674d2a7391164faa47236ee3755487b5135a367100c27f1e2bc07dde97d027",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
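When only one value from a payload like the inspect document above is needed, docker's --format flag accepts a Go template over the same structure; the suite uses the same mechanism for minikube status below. A small sketch using real docker CLI template syntax against this run's container; the program around it is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the host port published for 22/tcp, per the
		// NetworkSettings.Ports section of the inspect output above.
		out, err := exec.Command("docker", "inspect",
			"--format", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"pause-20210816221349-6487").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32901 in this run
	}
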
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (318.524892ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
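Both status probes print "Running" yet exit 2 because minikube status reports per-component state and folds any non-running component into its exit code; here that is consistent with the kubelet the journal above shows being stopped, which is why the helper labels the error "may be ok". A sketch that surfaces the three component fields in one call; the template fields (Host, Kubelet, APIServer) are minikube's documented status fields, the rest is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One invocation instead of one field per run as above.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format", "host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}",
			"-p", "pause-20210816221349-6487")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit encodes the non-running components; the 2 seen
			// here lines up with the stopped kubelet in the journal above.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}
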
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816221349-6487 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | kubernetes-upgrade-20210816221144-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:46 UTC | Mon, 16 Aug 2021 22:13:49 UTC |
	|         | kubernetes-upgrade-20210816221144-6487            |                                        |         |         |                               |                               |
	| start   | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:44 UTC | Mon, 16 Aug 2021 22:14:50 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | missing-upgrade-20210816221142-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:50 UTC | Mon, 16 Aug 2021 22:14:53 UTC |
	|         | missing-upgrade-20210816221142-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:14:53 UTC | Mon, 16 Aug 2021 22:15:24 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210816221221-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:22 UTC | Mon, 16 Aug 2021 22:15:25 UTC |
	|         | stopped-upgrade-20210816221221-6487               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210816221453-6487  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:24 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	|         | force-systemd-env-20210816221453-6487             |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816221527-6487                    | kubenet-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:27 UTC |
	| delete  | -p flannel-20210816221527-6487                    | flannel-20210816221527-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:27 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| delete  | -p false-20210816221528-6487                      | false-20210816221528-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:15:28 UTC |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:13:49 UTC | Mon, 16 Aug 2021 22:15:32 UTC |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --install-addons=false                            |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| start   | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:32 UTC | Mon, 16 Aug 2021 22:15:38 UTC |
	|         | --alsologtostderr                                 |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210816221326-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:52 UTC | Mon, 16 Aug 2021 22:15:55 UTC |
	|         | running-upgrade-20210816221326-6487               |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:28 UTC | Mon, 16 Aug 2021 22:17:12 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:22 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:22 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816221528-6487    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:55 UTC | Mon, 16 Aug 2021 22:17:51 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:17:59 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816221555-6487         | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:25 UTC | Mon, 16 Aug 2021 22:19:09 UTC |
	|         | cert-options-20210816221525-6487                  |                                        |         |         |                               |                               |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                        |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                        |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                        |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                        |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |         |         |                               |                               |
	| -p      | cert-options-20210816221525-6487                  | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:09 UTC | Mon, 16 Aug 2021 22:19:10 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                        |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                        |         |         |                               |                               |
	| unpause | -p pause-20210816221349-6487                      | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:11 UTC | Mon, 16 Aug 2021 22:19:12 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210816221525-6487       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:10 UTC | Mon, 16 Aug 2021 22:19:13 UTC |
	|         | cert-options-20210816221525-6487                  |                                        |         |         |                               |                               |
	| -p      | pause-20210816221349-6487 logs                    | pause-20210816221349-6487              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:29 UTC | Mon, 16 Aug 2021 22:19:29 UTC |
	|         | -n 25                                             |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:19:13
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:19:13.301704  226196 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:19:13.301796  226196 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:13.301806  226196 out.go:311] Setting ErrFile to fd 2...
	I0816 22:19:13.301810  226196 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:19:13.301930  226196 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:19:13.302239  226196 out.go:305] Setting JSON to false
	I0816 22:19:13.339656  226196 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3520,"bootTime":1629148833,"procs":338,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:19:13.339756  226196 start.go:121] virtualization: kvm guest
	I0816 22:19:13.342422  226196 out.go:177] * [embed-certs-20210816221913-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:19:13.343968  226196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:19:13.342577  226196 notify.go:169] Checking for updates...
	I0816 22:19:13.345536  226196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:19:13.346956  226196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:19:13.348437  226196 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:19:13.348908  226196 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:19:13.349030  226196 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:19:13.349129  226196 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:19:13.349176  226196 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:19:13.398668  226196 docker.go:132] docker version: linux-19.03.15
	I0816 22:19:13.398777  226196 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:19:13.479313  226196 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:19:13.43527811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:19:13.479459  226196 docker.go:244] overlay module found
	I0816 22:19:13.481616  226196 out.go:177] * Using the docker driver based on user configuration
	I0816 22:19:13.481642  226196 start.go:278] selected driver: docker
	I0816 22:19:13.481648  226196 start.go:751] validating driver "docker" against <nil>
	I0816 22:19:13.481666  226196 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:19:13.481725  226196 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:19:13.481746  226196 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:19:13.485128  226196 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:19:13.485959  226196 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:19:13.569882  226196 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:19:13.522702537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:19:13.569990  226196 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 22:19:13.570128  226196 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:19:13.570148  226196 cni.go:93] Creating CNI manager for ""
	I0816 22:19:13.570160  226196 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:19:13.570168  226196 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 22:19:13.570178  226196 start_flags.go:277] config:
	{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:19:13.572227  226196 out.go:177] * Starting control plane node embed-certs-20210816221913-6487 in cluster embed-certs-20210816221913-6487
	I0816 22:19:13.572272  226196 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:19:09.559512  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:12.058112  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.573748  226196 out.go:177] * Pulling base image ...
	I0816 22:19:13.573775  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:13.573805  226196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:19:13.573824  226196 cache.go:56] Caching tarball of preloaded images
	I0816 22:19:13.573877  226196 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:19:13.574001  226196 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:19:13.574019  226196 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:19:13.574131  226196 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:19:13.574156  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json: {Name:mk837e8194ab2b2a82bccf8a9a9a0d624adb0134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:13.668793  226196 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:19:13.668825  226196 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:19:13.668841  226196 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:19:13.668873  226196 start.go:313] acquiring machines lock for embed-certs-20210816221913-6487: {Name:mkaa6840e29b8ce519208ca05a6868b89ed678ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:19:13.668997  226196 start.go:317] acquired machines lock for "embed-certs-20210816221913-6487" in 106.298µs
	I0816 22:19:13.669022  226196 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:19:13.669091  226196 start.go:126] createHost starting for "" (driver="docker")
	I0816 22:19:11.060582  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.560175  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:15.562233  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:13.671499  226196 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 22:19:13.671729  226196 start.go:160] libmachine.API.Create for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:19:13.671758  226196 client.go:168] LocalClient.Create starting
	I0816 22:19:13.671842  226196 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:19:13.671872  226196 main.go:130] libmachine: Decoding PEM data...
	I0816 22:19:13.671889  226196 main.go:130] libmachine: Parsing certificate...
	I0816 22:19:13.672065  226196 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:19:13.672099  226196 main.go:130] libmachine: Decoding PEM data...
	I0816 22:19:13.672117  226196 main.go:130] libmachine: Parsing certificate...
	I0816 22:19:13.672510  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 22:19:13.711967  226196 cli_runner.go:162] docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 22:19:13.712036  226196 network_create.go:255] running [docker network inspect embed-certs-20210816221913-6487] to gather additional debugging logs...
	I0816 22:19:13.712052  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487
	W0816 22:19:13.749576  226196 cli_runner.go:162] docker network inspect embed-certs-20210816221913-6487 returned with exit code 1
	I0816 22:19:13.749612  226196 network_create.go:258] error running [docker network inspect embed-certs-20210816221913-6487]: docker network inspect embed-certs-20210816221913-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210816221913-6487
	I0816 22:19:13.749623  226196 network_create.go:260] output of [docker network inspect embed-certs-20210816221913-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210816221913-6487
	
	** /stderr **
	I0816 22:19:13.749661  226196 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:19:13.788235  226196 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-394b0b68014c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:c7:80:dd}}
	I0816 22:19:13.788882  226196 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-4ed2783b447d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d5:d5:90:49}}
	I0816 22:19:13.790033  226196 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-98b3ee991257 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b1:3b:84:51}}
	I0816 22:19:13.791520  226196 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000638098] misses:0}
	I0816 22:19:13.791590  226196 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:19:13.791605  226196 network_create.go:106] attempt to create docker network embed-certs-20210816221913-6487 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0816 22:19:13.791647  226196 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210816221913-6487
	I0816 22:19:13.861245  226196 network_create.go:90] docker network embed-certs-20210816221913-6487 192.168.76.0/24 created
	I0816 22:19:13.861276  226196 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20210816221913-6487" container
	I0816 22:19:13.861325  226196 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 22:19:13.900782  226196 cli_runner.go:115] Run: docker volume create embed-certs-20210816221913-6487 --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 22:19:13.940689  226196 oci.go:102] Successfully created a docker volume embed-certs-20210816221913-6487
	I0816 22:19:13.940770  226196 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210816221913-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --entrypoint /usr/bin/test -v embed-certs-20210816221913-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 22:19:14.711637  226196 oci.go:106] Successfully prepared a docker volume embed-certs-20210816221913-6487
	W0816 22:19:14.711693  226196 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 22:19:14.711702  226196 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 22:19:14.711707  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:14.711735  226196 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 22:19:14.711762  226196 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 22:19:14.711822  226196 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210816221913-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 22:19:14.795019  226196 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210816221913-6487 --name embed-certs-20210816221913-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210816221913-6487 --network embed-certs-20210816221913-6487 --ip 192.168.76.2 --volume embed-certs-20210816221913-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
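	(Note the --publish=127.0.0.1::<port> flags above: the empty host-port field asks Docker for a random loopback-only port, which is why SSH is later reached on an ephemeral port (32944 below) rather than 22. The mapping can be looked up with the standard docker CLI:

	    docker port embed-certs-20210816221913-6487 22/tcp   # e.g. 127.0.0.1:32944
	)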
	I0816 22:19:15.324335  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Running}}
	I0816 22:19:15.374298  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:15.420245  226196 cli_runner.go:115] Run: docker exec embed-certs-20210816221913-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 22:19:15.562793  226196 oci.go:278] the created container "embed-certs-20210816221913-6487" has a running status.
	I0816 22:19:15.562828  226196 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa...
	I0816 22:19:15.688176  226196 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 22:19:16.059432  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:16.100847  226196 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 22:19:16.100866  226196 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210816221913-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 22:19:14.058243  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:16.058503  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:18.557254  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:18.372387  226196 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210816221913-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.660521738s)
	I0816 22:19:18.372418  226196 kic.go:188] duration metric: took 3.660680 seconds to extract preloaded images to volume
	I0816 22:19:18.372480  226196 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:19:18.411770  226196 machine.go:88] provisioning docker machine ...
	I0816 22:19:18.411806  226196 ubuntu.go:169] provisioning hostname "embed-certs-20210816221913-6487"
	I0816 22:19:18.411860  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.450157  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:18.450335  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:18.450351  226196 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816221913-6487 && echo "embed-certs-20210816221913-6487" | sudo tee /etc/hostname
	I0816 22:19:18.603404  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816221913-6487
	
	I0816 22:19:18.603477  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.643743  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:18.643885  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:18.643922  226196 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816221913-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816221913-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816221913-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:19:18.767211  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: 
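	(The script above follows the Debian/Ubuntu convention of pinning the machine's own hostname to 127.0.1.1 so the node name always resolves locally. A minimal check, assuming a shell inside the container:

	    grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 embed-certs-20210816221913-6487
	)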
	I0816 22:19:18.767256  226196 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:19:18.767319  226196 ubuntu.go:177] setting up certificates
	I0816 22:19:18.767332  226196 provision.go:83] configureAuth start
	I0816 22:19:18.767396  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:18.807035  226196 provision.go:138] copyHostCerts
	I0816 22:19:18.807086  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:19:18.807096  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:19:18.807144  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:19:18.807216  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:19:18.807226  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:19:18.807241  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:19:18.807289  226196 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:19:18.807297  226196 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:19:18.807312  226196 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:19:18.807352  226196 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816221913-6487 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210816221913-6487]
	I0816 22:19:18.895657  226196 provision.go:172] copyRemoteCerts
	I0816 22:19:18.895709  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:19:18.895743  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:18.936723  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.026683  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:19:19.042459  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:19:19.058500  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:19:19.073859  226196 provision.go:86] duration metric: configureAuth took 306.513578ms
	I0816 22:19:19.073881  226196 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:19:19.074033  226196 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:19:19.074189  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.114560  226196 main.go:130] libmachine: Using SSH client type: native
	I0816 22:19:19.114725  226196 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I0816 22:19:19.114746  226196 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:19:19.468077  226196 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:19:19.468107  226196 machine.go:91] provisioned docker machine in 1.056316804s
	I0816 22:19:19.468118  226196 client.go:171] LocalClient.Create took 5.796352067s
	I0816 22:19:19.468135  226196 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210816221913-6487" took 5.796406297s
	I0816 22:19:19.468146  226196 start.go:267] post-start starting for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:19:19.468156  226196 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:19:19.468228  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:19:19.468285  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.509141  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.599058  226196 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:19:19.601697  226196 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:19:19.601720  226196 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:19:19.601729  226196 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:19:19.601736  226196 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:19:19.601747  226196 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:19:19.601796  226196 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:19:19.601922  226196 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:19:19.602043  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:19:19.608252  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:19:19.623893  226196 start.go:270] post-start completed in 155.736025ms
	I0816 22:19:19.624252  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:19.664042  226196 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:19:19.664257  226196 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:19:19.664299  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.704852  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.791957  226196 start.go:129] duration metric: createHost completed in 6.122855076s
	I0816 22:19:19.791983  226196 start.go:80] releasing machines lock for "embed-certs-20210816221913-6487", held for 6.122975943s
	I0816 22:19:19.792053  226196 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:19:19.831056  226196 ssh_runner.go:149] Run: systemctl --version
	I0816 22:19:19.831098  226196 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:19:19.831117  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.831149  226196 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:19:19.878381  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.884046  226196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:19:19.967872  226196 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:19:20.006416  226196 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:19:20.015382  226196 docker.go:153] disabling docker service ...
	I0816 22:19:20.015427  226196 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:19:20.024573  226196 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:19:20.033758  226196 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:19:20.100113  226196 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:19:20.165813  226196 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:19:20.174379  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:19:20.186280  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:19:20.193811  226196 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:19:20.193843  226196 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:19:20.202278  226196 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:19:20.207967  226196 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:19:20.208014  226196 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:19:20.214588  226196 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
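	(The modprobe/sysctl steps above provide the bridge-netfilter and IP-forwarding state that kubeadm's preflight later checks. The same state can be spot-checked by hand, assuming standard kernel paths:

	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	)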
	I0816 22:19:20.220334  226196 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:19:20.280776  226196 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:19:20.289616  226196 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:19:20.289667  226196 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:19:20.292630  226196 start.go:413] Will wait 60s for crictl version
	I0816 22:19:20.292670  226196 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:19:20.319206  226196 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
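	(crictl resolves its endpoint from the /etc/crictl.yaml written a few lines earlier; the equivalent one-off invocation without the config file would be, using standard crictl flags:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)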
	I0816 22:19:20.319272  226196 ssh_runner.go:149] Run: crio --version
	I0816 22:19:20.378981  226196 ssh_runner.go:149] Run: crio --version
	I0816 22:19:18.060608  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:20.061387  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:20.442981  226196 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:19:20.443052  226196 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:19:20.481608  226196 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0816 22:19:20.484825  226196 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
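	(The rewrite above is the usual safe pattern for editing /etc/hosts: filter out any stale entry, append the fresh mapping, write to a temp file, then copy it back in one step. A readable sketch of the same idea:

	    { grep -v 'host\.minikube\.internal$' /etc/hosts
	      echo '192.168.76.1 host.minikube.internal'; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts
	)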
	I0816 22:19:20.493429  226196 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:19:20.493496  226196 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:19:20.538638  226196 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:19:20.538659  226196 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:19:20.538699  226196 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:19:20.562178  226196 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:19:20.562197  226196 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:19:20.562255  226196 ssh_runner.go:149] Run: crio config
	I0816 22:19:20.626481  226196 cni.go:93] Creating CNI manager for ""
	I0816 22:19:20.626501  226196 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:19:20.626511  226196 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:19:20.626525  226196 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816221913-6487 NodeName:embed-certs-20210816221913-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:19:20.626672  226196 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "embed-certs-20210816221913-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
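	(The four documents above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — are written to /var/tmp/minikube/kubeadm.yaml.new below. A config like this can be exercised without touching node state via kubeadm's dry-run mode; a hedged example, assuming the same binary version:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)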
	
	I0816 22:19:20.626779  226196 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=embed-certs-20210816221913-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
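	(The [Service] drop-in above is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line first clears the base unit's command before the real one is set. On a systemd host the merged result can be inspected with:

	    systemctl cat kubelet
	)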
	I0816 22:19:20.626833  226196 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:19:20.633579  226196 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:19:20.633631  226196 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:19:20.640034  226196 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:19:20.651613  226196 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:19:20.663703  226196 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0816 22:19:20.675324  226196 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:19:20.678037  226196 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:19:20.686319  226196 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487 for IP: 192.168.76.2
	I0816 22:19:20.686361  226196 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:19:20.686382  226196 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:19:20.686460  226196 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key
	I0816 22:19:20.686472  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt with IP's: []
	I0816 22:19:20.852920  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt ...
	I0816 22:19:20.852950  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.crt: {Name:mkd9c198d4c7f9c8c784093a4ebe740a0ac82674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:20.853172  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key ...
	I0816 22:19:20.853197  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key: {Name:mk89c99383a818365234fa5c6fc15aee0ea06aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:20.853314  226196 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25
	I0816 22:19:20.853325  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 22:19:21.046284  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 ...
	I0816 22:19:21.046315  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25: {Name:mka4656174b58e1d143b6f452fb49cb942928021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.046486  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25 ...
	I0816 22:19:21.046498  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25: {Name:mk8533245acff5c8fbef6dedc3df9d3eef9bb6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.046569  226196 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt
	I0816 22:19:21.046624  226196 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key
	I0816 22:19:21.046673  226196 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key
	I0816 22:19:21.046681  226196 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt with IP's: []
	I0816 22:19:21.283110  226196 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt ...
	I0816 22:19:21.283143  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt: {Name:mk02cebfd2674b34a37859dd1d216f5938633a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.283316  226196 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key ...
	I0816 22:19:21.283332  226196 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key: {Name:mkfe96cd1b7b66e5ace97031a88d9646a295e963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:19:21.283486  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:19:21.283521  226196 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:19:21.283532  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:19:21.283557  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:19:21.283590  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:19:21.283615  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:19:21.283657  226196 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:19:21.284589  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:19:21.313244  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:19:21.332732  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:19:21.351082  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:19:21.368388  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:19:21.384511  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:19:21.400058  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:19:21.414989  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:19:21.430650  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:19:21.446235  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:19:21.461586  226196 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:19:21.476849  226196 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:19:21.487759  226196 ssh_runner.go:149] Run: openssl version
	I0816 22:19:21.492008  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:19:21.498387  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.501108  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.501154  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:19:21.505926  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:19:21.512772  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:19:21.519185  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.521884  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.521918  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:19:21.526199  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 22:19:21.532541  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:19:21.538972  226196 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.541746  226196 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.541777  226196 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:19:21.545930  226196 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
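	(The link names used above — b5213941.0, 51391683.0, 3ec20f2e.0 — are OpenSSL subject hashes plus a .0 collision suffix; this is how OpenSSL locates CA certificates in /etc/ssl/certs. The hash comes straight from the certificate, e.g.:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	)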
	I0816 22:19:21.552406  226196 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:19:21.552484  226196 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:19:21.552533  226196 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:19:21.576024  226196 cri.go:76] found id: ""
	I0816 22:19:21.576083  226196 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:19:21.582400  226196 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:19:21.588531  226196 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:19:21.588572  226196 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:19:21.594576  226196 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:19:21.594618  226196 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
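	(Most preflight checks are suppressed above via --ignore-preflight-errors, as required when the "node" is itself a container. To run just the preflight phase in isolation, the same binary supports, as a hedged example:

	    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	)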
	I0816 22:19:21.876995  226196 out.go:204]   - Generating certificates and keys ...
	I0816 22:19:20.561242  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:23.058076  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:22.080108  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:24.560472  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:24.328246  226196 out.go:204]   - Booting up control plane ...
	I0816 22:19:25.556694  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:27.557188  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:27.059988  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:19:29.061355  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:19:31 UTC. --
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.997181209Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 16 22:15:33 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:33.998911132Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001615431Z" level=info msg="Conmon does support the --sync option"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001679089Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.001686289Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.006618470Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.009192800Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.011666290Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023071034Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.023093501Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.335777529Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-7wcqt Namespace:kube-system ID:ace8d49de7551bd58fe5eaea049c8354bcd32c69f45c71ad2cb198b464d3fcf7 NetNS:/var/run/netns/72967a55-409d-4e22-a50b-fe735e218d4f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 16 22:15:34 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:34.336029066Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 16 22:15:34 pause-20210816221349-6487 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 16 22:15:37 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:37.869390202Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.008166152Z" level=info msg="Ran pod sandbox 4e41a3650a65fa3e6ea88a533598623ec9c31b939feb4c7936a8d6c0e094b9f1 with infra container: kube-system/storage-provisioner/POD" id=f027cbdd-a110-4c73-bd17-ec2c625617e8 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.009539171Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010209695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ba0b87b-369b-437f-8093-e60bb62a6e5f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.010864154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.011418773Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6a8cdcd8-6743-458d-a5bd-6721cf6293d6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.012207175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023341263Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/passwd: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.023470306Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f094131e65ab0dac08bd38816173827064bbf3fa41e0cb5285e07440d34effc/merged/etc/group: no such file or directory"
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144183330Z" level=info msg="Created container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b3c53444-4fd4-4816-941c-ae75f44b9aab name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.144745336Z" level=info msg="Starting container: 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:15:38 pause-20210816221349-6487 crio[2941]: time="2021-08-16 22:15:38.155275298Z" level=info msg="Started container 2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee: kube-system/storage-provisioner/storage-provisioner" id=b7210b75-09f3-4fd5-8d69-e9dcc6142d7a name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	2bd1364ac865c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago       Exited              storage-provisioner       0                   4e41a3650a65f
	a3847cf5a7a0a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   4 minutes ago       Running             coredns                   0                   ace8d49de7551
	ba2e9dd72df01       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   4 minutes ago       Running             kindnet-cni               0                   d5d9684c84cea
	1b3d3880e345b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   4 minutes ago       Running             kube-proxy                0                   4a42049e95348
	1b4dd675dc4bc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   5 minutes ago       Running             etcd                      0                   c159e3fd639d7
	8a5626e3acb8d       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   5 minutes ago       Running             kube-controller-manager   0                   5f516d619d78c
	e812d329ba697       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   5 minutes ago       Running             kube-apiserver            0                   a055e0d3dc6de
	a65e43c156f4f       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   5 minutes ago       Running             kube-scheduler            0                   97b975cd86e3b
	
	* 
	* ==> coredns [a3847cf5a7a0a50db95228eb59c9b2f0fd2f919f5d3ca5657533960a22d80eeb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816221349-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816221349-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816221349-6487
	                    minikube.k8s.io/updated_at=2021_08_16T22_14_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:14:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816221349-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:15:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Aug 2021 22:14:39 +0000   Mon, 16 Aug 2021 22:19:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210816221349-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                43a26ced-e56b-4198-bb41-8e54e8862df5
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-7wcqt                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m52s
	  kube-system                 etcd-pause-20210816221349-6487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m
	  kube-system                 kindnet-gqxwk                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m52s
	  kube-system                 kube-apiserver-pause-20210816221349-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-pause-20210816221349-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-njz9n                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-scheduler-pause-20210816221349-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  5m24s (x5 over 5m24s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x4 over 5m24s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x4 over 5m24s)  kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m                     kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m                     kubelet     Node pause-20210816221349-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m                     kubelet     Node pause-20210816221349-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m52s                  kubelet     Node pause-20210816221349-6487 status is now: NodeReady
	  Normal  Starting                 4m51s                  kube-proxy  Starting kube-proxy.
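
Note: all four node conditions read Unknown with reason NodeStatusUnknown because the pause flow stopped the kubelet, which then stopped renewing its lease (RenewTime stuck at 22:15:32, heartbeats at 22:14:39). A minimal client-go sketch for reading the same conditions programmatically, assuming a default kubeconfig; the node name is taken from the output above:

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config the same way kubectl --context would.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"pause-20210816221349-6487", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Ready/MemoryPressure/etc. report Unknown while the kubelet is stopped.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-8s %s\n", c.Type, c.Status, c.Reason)
		}
	}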
	
	* 
	* ==> dmesg <==
	* [  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.003943] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000057] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +8.187375] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4ed2783b447d
	[  +0.000001] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +1.048379] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethb975c587
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ae e9 bc 5d c0 ce 08 06        .........]....
	[  +0.000537] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth6258e918
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 52 65 b5 f7 2a 08 06        .......Re..*..
	[  +4.928013] cgroup: cgroup2: unknown option "nsdelegate"
	[ +20.434413] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth77a7f862
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff a6 7f 02 c1 0b 7c 08 06        ...........|..
	[  +0.312009] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethf2148e09
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 62 56 15 39 44 18 08 06        ......bV.9D...
	[  +0.299903] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth00213cf6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 76 97 cb ee 26 08 06        .......v...&..
	[  +2.187341] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug16 22:19] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [1b4dd675dc4bc712d775d9439c8d45ced8809e269a2c6c55a3a6d3160d5415fe] <==
	* 2021-08-16 22:19:11.927827 I | embed: rejected connection from "127.0.0.1:34508" (error "write tcp 127.0.0.1:2379->127.0.0.1:34508: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.928582 I | embed: rejected connection from "127.0.0.1:34474" (error "write tcp 127.0.0.1:2379->127.0.0.1:34474: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929510 I | embed: rejected connection from "127.0.0.1:35136" (error "write tcp 127.0.0.1:2379->127.0.0.1:35136: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929565 I | embed: rejected connection from "127.0.0.1:35132" (error "write tcp 127.0.0.1:2379->127.0.0.1:35132: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929787 I | embed: rejected connection from "127.0.0.1:34450" (error "write tcp 127.0.0.1:2379->127.0.0.1:34450: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.929877 I | embed: rejected connection from "127.0.0.1:34448" (error "write tcp 127.0.0.1:2379->127.0.0.1:34448: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.931002 I | embed: rejected connection from "127.0.0.1:34486" (error "write tcp 127.0.0.1:2379->127.0.0.1:34486: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.931153 I | embed: rejected connection from "127.0.0.1:34950" (error "write tcp 127.0.0.1:2379->127.0.0.1:34950: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.932464 I | embed: rejected connection from "127.0.0.1:34860" (error "write tcp 127.0.0.1:2379->127.0.0.1:34860: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.932755 I | embed: rejected connection from "127.0.0.1:35138" (error "write tcp 127.0.0.1:2379->127.0.0.1:35138: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933381 I | embed: rejected connection from "127.0.0.1:35046" (error "write tcp 127.0.0.1:2379->127.0.0.1:35046: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933624 I | embed: rejected connection from "127.0.0.1:34968" (error "write tcp 127.0.0.1:2379->127.0.0.1:34968: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.933778 I | embed: rejected connection from "127.0.0.1:35004" (error "write tcp 127.0.0.1:2379->127.0.0.1:35004: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.934215 I | embed: rejected connection from "127.0.0.1:34954" (error "write tcp 127.0.0.1:2379->127.0.0.1:34954: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935526 I | embed: rejected connection from "127.0.0.1:34922" (error "write tcp 127.0.0.1:2379->127.0.0.1:34922: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935695 I | embed: rejected connection from "127.0.0.1:34936" (error "write tcp 127.0.0.1:2379->127.0.0.1:34936: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.935730 I | embed: rejected connection from "127.0.0.1:34878" (error "write tcp 127.0.0.1:2379->127.0.0.1:34878: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936378 I | embed: rejected connection from "127.0.0.1:34848" (error "write tcp 127.0.0.1:2379->127.0.0.1:34848: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936902 I | embed: rejected connection from "127.0.0.1:35146" (error "write tcp 127.0.0.1:2379->127.0.0.1:35146: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.936977 I | embed: rejected connection from "127.0.0.1:34498" (error "write tcp 127.0.0.1:2379->127.0.0.1:34498: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.937382 I | embed: rejected connection from "127.0.0.1:34952" (error "write tcp 127.0.0.1:2379->127.0.0.1:34952: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.938652 I | embed: rejected connection from "127.0.0.1:35126" (error "write tcp 127.0.0.1:2379->127.0.0.1:35126: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.938770 I | embed: rejected connection from "127.0.0.1:35120" (error "write tcp 127.0.0.1:2379->127.0.0.1:35120: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.939069 I | embed: rejected connection from "127.0.0.1:34920" (error "write tcp 127.0.0.1:2379->127.0.0.1:34920: write: broken pipe", ServerName "")
	2021-08-16 22:19:11.939248 I | embed: rejected connection from "127.0.0.1:34502" (error "write tcp 127.0.0.1:2379->127.0.0.1:34502: write: broken pipe", ServerName "")
	
	* 
	* ==> kernel <==
	*  22:19:31 up 58 min,  0 users,  load average: 2.28, 3.20, 2.29
	Linux pause-20210816221349-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e812d329ba697b1e1640c0e4b259cdaef06f8cb4498e9c761789a26059eb18ef] <==
	* I0816 22:19:21.842041       1 trace.go:205] Trace[289958795]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:21.845) (total time: 59996ms):
	Trace[289958795]: [59.996320262s] [59.996320262s] END
	I0816 22:19:22.488231       1 trace.go:205] Trace[29487337]: "GuaranteedUpdate etcd3" type:*core.Node (16-Aug-2021 22:19:02.869) (total time: 19618ms):
	Trace[29487337]: ---"Transaction committed" 19617ms (22:19:00.488)
	Trace[29487337]: [19.618419774s] [19.618419774s] END
	I0816 22:19:22.488617       1 trace.go:205] Trace[1252951637]: "Update" url:/api/v1/nodes/pause-20210816221349-6487/status,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:19:02.869) (total time: 19619ms):
	Trace[1252951637]: ---"Object stored in database" 19618ms (22:19:00.488)
	Trace[1252951637]: [19.619034129s] [19.619034129s] END
	I0816 22:19:23.981629       1 trace.go:205] Trace[1675584492]: "List etcd3" key:/resourcequotas/kube-public,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:18:33.781) (total time: 50199ms):
	Trace[1675584492]: [50.199633862s] [50.199633862s] END
	I0816 22:19:23.981636       1 trace.go:205] Trace[1665405246]: "List etcd3" key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:18:37.566) (total time: 46414ms):
	Trace[1665405246]: [46.414964917s] [46.414964917s] END
	I0816 22:19:23.981758       1 trace.go:205] Trace[1931229347]: "List" url:/api/v1/namespaces/kube-public/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:33.781) (total time: 50199ms):
	Trace[1931229347]: ---"Listing from storage done" 50199ms (22:19:00.981)
	Trace[1931229347]: [50.199781473s] [50.199781473s] END
	I0816 22:19:23.981865       1 trace.go:205] Trace[2017992765]: "List" url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:18:37.566) (total time: 46415ms):
	Trace[2017992765]: ---"Listing from storage done" 46415ms (22:19:00.981)
	Trace[2017992765]: [46.415211105s] [46.415211105s] END
	W0816 22:19:24.018072       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:24.208146       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:25.096285       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:25.943668       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:26.656310       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:26.918536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:19:27.697434       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	
	* 
	* ==> kube-controller-manager [8a5626e3acb8d6e6512e7bc8e176d7f98cbab112b7311beb2017dc277e1b04c1] <==
	* I0816 22:14:39.641950       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:39.651611       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7wcqt"
	I0816 22:14:39.724203       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.724328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:14:39.731288       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:14:39.736318       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-hr2q5"
	I0816 22:14:44.033494       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:16:18.834007       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: rpc error: code = Unavailable desc = transport is closing
	E0816 22:17:18.835292       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:17:18.835317       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:17:23.835603       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	E0816 22:17:57.848763       1 node_lifecycle_controller.go:1107] Error updating node pause-20210816221349-6487: Timeout: request did not complete within requested timeout context deadline exceeded
	E0816 22:18:57.849960       1 node_lifecycle_controller.go:801] Failed while getting a Node to retry updating node health. Probably Node pause-20210816221349-6487 was deleted.
	E0816 22:18:57.849980       1 node_lifecycle_controller.go:806] Update health of Node '' from Controller error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-20210816221349-6487). Skipping - no pods will be evicted.
	I0816 22:19:02.850266       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	I0816 22:19:22.495672       1 event.go:291] "Event occurred" object="pause-20210816221349-6487" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210816221349-6487 status is now: NodeNotReady"
	I0816 22:19:22.507118       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.511253       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.515555       1 event.go:291] "Event occurred" object="kube-system/kindnet-gqxwk" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.518905       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-njz9n" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.524007       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-7wcqt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.527327       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.536072       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:19:22.538805       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0816 22:19:22.538849       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210816221349-6487" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [1b3d3880e345b49ada1f5ef7d7e3d626a79c2d8af034d65f4bf86d62a01d7b04] <==
	* I0816 22:14:40.215578       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:14:40.215638       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:14:40.215676       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:14:40.247009       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:14:40.247045       1 server_others.go:212] Using iptables Proxier.
	I0816 22:14:40.247058       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:14:40.247072       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:14:40.247479       1 server.go:643] Version: v1.21.3
	I0816 22:14:40.248182       1 config.go:315] Starting service config controller
	I0816 22:14:40.248255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:14:40.248210       1 config.go:224] Starting endpoint slice config controller
	I0816 22:14:40.248339       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:14:40.250530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:14:40.251756       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:14:40.348781       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:14:40.348804       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a65e43c156f4fceb68516be869bf03e5d844bd50bf1c543882409696da7e44b7] <==
	* E0816 22:14:17.590366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:17.693992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.713305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:17.720174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:19.024758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:19.034977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:19.119270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:14:19.472008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:19.474924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:19.492908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.552344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.701087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:19.846725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:14:20.081603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:14:20.288654       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:14:20.300668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:14:20.446407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:20.757057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:14:23.039661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:14:23.059708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:14:23.337106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:14:23.637126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:14:23.867448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:14:24.219179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0816 22:14:26.331536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:13:51 UTC, end at Mon 2021-08-16 22:19:31 UTC. --
	Aug 16 22:19:12 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:12.507382    4924 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229097    4924 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229136    4924 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229148    4924 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229156    4924 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229315    4924 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229356    4924 remote_runtime.go:62] parsed scheme: ""
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229364    4924 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229398    4924 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229407    4924 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229471    4924 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229483    4924 remote_image.go:50] parsed scheme: ""
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229488    4924 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229496    4924 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229502    4924 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229578    4924 kubelet.go:404] "Attempting to sync node with API server"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229596    4924 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229624    4924 kubelet.go:283] "Adding apiserver pod source"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.229640    4924 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.237613    4924 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: E0816 22:19:17.557870    4924 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 16 22:19:17 pause-20210816221349-6487 kubelet[4924]: I0816 22:19:17.558460    4924 server.go:1190] "Started kubelet"
	Aug 16 22:19:17 pause-20210816221349-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:19:17 pause-20210816221349-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
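
Note: the kubelet warns that the bare socket path is deprecated in favor of the full URL form unix:///var/run/crio/crio.sock. A minimal sketch of dialing that same CRI endpoint over gRPC and asking for its version, assuming google.golang.org/grpc and k8s.io/cri-api (the v1alpha2 API shown throughout this log) are on the module path and the socket is reachable (root); the expected reply matches the "Container runtime initialized" line above (cri-o, 1.20.3, v1alpha1):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// Full-URL form the kubelet asks for, instead of the bare socket path.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
	}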
	
	* 
	* ==> storage-provisioner [2bd1364ac865c72405ac6ee6f6a80d71836437fd21ce8b5c65d60a7c93f73dee] <==
	* rs/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 63 [sync.Cond.Wait, 3 minutes]:
	sync.runtime_notifyListWait(0xc000046850, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000046840)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f2420, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00014af00, 0x18e5530, 0xc000047cc0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00024a0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00024a0e0, 0x18b3d60, 0xc000290000, 0x1, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00024a0e0, 0x3b9aca00, 0x0, 0x1, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00024a0e0, 0x3b9aca00, 0xc00008eea0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
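
Note: goroutine 63 is idle, not deadlocked: workqueue.(*Type).Get parks in sync.Cond.Wait until an item is queued or the queue shuts down, which is exactly the frame this dump shows. A minimal sketch of that consumer pattern, assuming k8s.io/client-go/util/workqueue; the work item string is made up for illustration:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/util/workqueue"
	)
	
	func main() {
		q := workqueue.New()
		done := make(chan struct{})
		go func() {
			defer close(done)
			for {
				// Get blocks in sync.Cond.Wait while the queue is empty --
				// the same frame goroutine 63 is parked in above.
				item, shutdown := q.Get()
				if shutdown {
					return
				}
				fmt.Println("processing", item)
				q.Done(item) // mark the item finished so it can be requeued later
			}
		}()
		q.Add("pvc/default/example") // hypothetical work item
		q.ShutDown()                 // Get drains the queue, then returns shutdown=true
		<-done
	}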
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210816221349-6487 -n pause-20210816221349-6487
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210816221349-6487 -n pause-20210816221349-6487: exit status 2 (345.144876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210816221349-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210816221349-6487 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210816221349-6487 describe pod : exit status 1 (73.332366ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210816221349-6487 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (19.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210816221555-6487 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210816221555-6487 --alsologtostderr -v=1: exit status 80 (1.951590524s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210816221555-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:24:26.737005  255169 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:24:26.737082  255169 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:26.737090  255169 out.go:311] Setting ErrFile to fd 2...
	I0816 22:24:26.737094  255169 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:26.737196  255169 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:24:26.737872  255169 out.go:305] Setting JSON to false
	I0816 22:24:26.737932  255169 mustload.go:65] Loading cluster: no-preload-20210816221555-6487
	I0816 22:24:26.738670  255169 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:24:26.739124  255169 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:26.778250  255169 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:26.778976  255169 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210816221555-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:24:26.781733  255169 out.go:177] * Pausing node no-preload-20210816221555-6487 ... 
	I0816 22:24:26.781760  255169 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:26.781973  255169 ssh_runner.go:149] Run: systemctl --version
	I0816 22:24:26.782008  255169 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:26.820756  255169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:26.911689  255169 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:24:26.920394  255169 pause.go:50] kubelet running: true
	I0816 22:24:26.920466  255169 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:24:27.073423  255169 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:24:27.073535  255169 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:24:27.145057  255169 cri.go:76] found id: "03ab9f1a4628206cf1e1ca0b6d15e457fa8e4988879154f8fb91512b2a4e77c6"
	I0816 22:24:27.145082  255169 cri.go:76] found id: "7f16dde1fc9b15e0c5a936ad881565e64ec0797e071ca0f3615f75be1d7a7ba5"
	I0816 22:24:27.145087  255169 cri.go:76] found id: "4499305eb0f7f962214785385bd797ca8b3baff025a5d2bf7a52b4107aca142c"
	I0816 22:24:27.145091  255169 cri.go:76] found id: "e919cbdfb244328186a80d3e1a9645c58a3901f4e9bbfcf04ae307ef0d568d5c"
	I0816 22:24:27.145095  255169 cri.go:76] found id: "43dbac811c6beba2363524514bdb89ffacc43063e93d24959fe2698b532d9852"
	I0816 22:24:27.145099  255169 cri.go:76] found id: "3b6e5503185321aa0b9f2a8dd00d97f87a6d5995e4ead91d0eb34f104511e1c1"
	I0816 22:24:27.145102  255169 cri.go:76] found id: "a5dbf4c341ee400d40847a5f8d81d87f6ae62104bf8f40f5c10b88c4913deb64"
	I0816 22:24:27.145106  255169 cri.go:76] found id: "5f39439703b1098bcb2c996d36f81db6b863d3bc7dda1a069509a87e2ff0a3b1"
	I0816 22:24:27.145111  255169 cri.go:76] found id: "17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c"
	I0816 22:24:27.145120  255169 cri.go:76] found id: "dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	I0816 22:24:27.145125  255169 cri.go:76] found id: ""
	I0816 22:24:27.145179  255169 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210816221555-6487 --alsologtostderr -v=1 failed: exit status 80
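
Note: the last step logged before the exit status 80 is sudo runc list -f json, which the pause flow uses to enumerate containers before freezing them. A rough sketch of running and decoding that same listing; the struct below is a hand-picked subset of runc's state JSON (id/status/bundle are standard fields), and encoding/json silently ignores the rest:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// containerState is a hypothetical subset of runc's JSON state output.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
		Bundle string `json:"bundle"`
	}
	
	func main() {
		// Same command the pause flow runs (root required, as in the log).
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			panic(err)
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			panic(err)
		}
		for _, s := range states {
			fmt.Printf("%s\t%s\t%s\n", s.ID, s.Status, s.Bundle)
		}
	}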
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210816221555-6487
helpers_test.go:236: (dbg) docker inspect no-preload-20210816221555-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2",
	        "Created": "2021-08-16T22:15:57.474130005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 218315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:18:22.213784577Z",
	            "FinishedAt": "2021-08-16T22:18:19.937832982Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/hostname",
	        "HostsPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/hosts",
	        "LogPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2-json.log",
	        "Name": "/no-preload-20210816221555-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210816221555-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210816221555-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210816221555-6487",
	                "Source": "/var/lib/docker/volumes/no-preload-20210816221555-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210816221555-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210816221555-6487",
	                "name.minikube.sigs.k8s.io": "no-preload-20210816221555-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c83a586e6640736ff362f4190d336a19ec88791ccc7bf52861d1aa8e554aeef4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32933"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c83a586e6640",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210816221555-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "65a501908096"
	                    ],
	                    "NetworkID": "98b3ee991257792a6830dd24d04b5717e00f2ba3533153c90cff98d22c7a9c0d",
	                    "EndpointID": "fb0e430b9a84cef5766a8f4e0b1c88858a3b4a797039b210ee6891535c00f7cb",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
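The ssh provisioning steps later in this report dial the host side of the 22/tcp mapping recorded in NetworkSettings.Ports above (127.0.0.1:32934). As a minimal sketch of reading that mapping back out of docker container inspect JSON, assuming a trimmed stand-in struct rather than any real minikube type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// inspectEntry keeps only the fields this lookup touches; the full
	// inspect payload is far larger, as the stdout block above shows.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect prints a JSON array, so decode into a slice.
		var entries []inspectEntry
		if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// 22/tcp is the container's sshd; for the output above this
		// prints 127.0.0.1:32934.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}

Piping the inspect output for no-preload-20210816221555-6487 into this sketch would print the same 127.0.0.1:32934 pair that the Ports block records.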
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487: exit status 2 (316.111474ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
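The --format={{.Host}} argument above is a Go text/template evaluated against minikube's status structure, which is why only "Running" comes back even though the command exits 2 (after a pause, other components are presumably stopped, which helpers_test treats as "may be ok"). A self-contained sketch of that mechanism, using an assumed stand-in struct rather than minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in holding the one field the format string reads.
	type Status struct {
		Host string
	}

	func main() {
		// The same template string that was passed via --format.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints "Running", matching the stdout block above.
		tmpl.Execute(os.Stdout, Status{Host: "Running"})
	}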
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210816221555-6487 logs -n 25
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                               | old-k8s-version-20210816221528-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:17:43 UTC |
	|         | old-k8s-version-20210816221528-6487               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:55 UTC | Mon, 16 Aug 2021 22:17:51 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:17:59 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:25 UTC | Mon, 16 Aug 2021 22:19:09 UTC |
	|         | cert-options-20210816221525-6487                  |                                                |         |         |                               |                               |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	| -p      | cert-options-20210816221525-6487                  | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:09 UTC | Mon, 16 Aug 2021 22:19:10 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                |         |         |                               |                               |
	| unpause | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:11 UTC | Mon, 16 Aug 2021 22:19:12 UTC |
	|         | --alsologtostderr -v=5                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:10 UTC | Mon, 16 Aug 2021 22:19:13 UTC |
	|         | cert-options-20210816221525-6487                  |                                                |         |         |                               |                               |
	| -p      | pause-20210816221349-6487 logs                    | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:29 UTC | Mon, 16 Aug 2021 22:19:29 UTC |
	|         | -n 25                                             |                                                |         |         |                               |                               |
	| -p      | pause-20210816221349-6487 logs                    | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:30 UTC | Mon, 16 Aug 2021 22:19:31 UTC |
	|         | -n 25                                             |                                                |         |         |                               |                               |
	| delete  | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:32 UTC | Mon, 16 Aug 2021 22:19:35 UTC |
	|         | --alsologtostderr -v=5                            |                                                |         |         |                               |                               |
	| profile | list --output json                                | minikube                                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:35 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487         |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio         |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
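The Start Time and End Time columns above share one fixed layout, so per-command durations fall out directly; a small worked sketch against the first no-preload start row, with the layout string inferred from the table rather than taken from minikube source:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Go reference layout matching "Mon, 16 Aug 2021 22:15:55 UTC".
		const layout = "Mon, 02 Jan 2006 15:04:05 MST"
		start, _ := time.Parse(layout, "Mon, 16 Aug 2021 22:15:55 UTC")
		end, _ := time.Parse(layout, "Mon, 16 Aug 2021 22:17:51 UTC")
		// The initial --preload=false start took just under two minutes.
		fmt.Println(end.Sub(start)) // 1m56s
	}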
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:21:15
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
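Every entry below follows the format line above; as a hand-written parsing sketch (the regexp here is an assumption, not klog's own grammar):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Fields per the header: severity, mmdd, hh:mm:ss.uuuuuu,
	// threadid, file:line, then the free-form message.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		// A line copied verbatim from the stream below.
		line := "I0816 22:21:15.620555  240293 out.go:305] Setting JSON to false"
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("sev=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}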
	I0816 22:21:11.560087  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:13.560955  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:15.620136  240293 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:21:15.620202  240293 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:21:15.620205  240293 out.go:311] Setting ErrFile to fd 2...
	I0816 22:21:15.620209  240293 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:21:15.620308  240293 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:21:15.620555  240293 out.go:305] Setting JSON to false
	I0816 22:21:15.655608  240293 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3643,"bootTime":1629148833,"procs":318,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:21:15.655702  240293 start.go:121] virtualization: kvm guest
	I0816 22:21:15.658735  240293 out.go:177] * [embed-certs-20210816221913-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:21:15.660463  240293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:21:15.658858  240293 notify.go:169] Checking for updates...
	I0816 22:21:15.662037  240293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:21:15.663349  240293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:21:15.664728  240293 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:21:15.665147  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:21:15.665515  240293 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:21:15.716323  240293 docker.go:132] docker version: linux-19.03.15
	I0816 22:21:15.716389  240293 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:21:15.794376  240293 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:21:15.751543454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:21:15.794459  240293 docker.go:244] overlay module found
	I0816 22:21:15.796939  240293 out.go:177] * Using the docker driver based on existing profile
	I0816 22:21:15.796963  240293 start.go:278] selected driver: docker
	I0816 22:21:15.796970  240293 start.go:751] validating driver "docker" against &{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:15.797067  240293 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:21:15.797107  240293 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:21:15.797127  240293 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:21:15.798748  240293 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:21:15.799574  240293 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:21:15.879748  240293 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:21:15.836065685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:21:15.879884  240293 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:21:15.879939  240293 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:21:15.882041  240293 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:21:15.882141  240293 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:21:15.882164  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:15.882176  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:15.882186  240293 start_flags.go:277] config:
	{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:15.883921  240293 out.go:177] * Starting control plane node embed-certs-20210816221913-6487 in cluster embed-certs-20210816221913-6487
	I0816 22:21:15.883958  240293 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:21:15.885412  240293 out.go:177] * Pulling base image ...
	I0816 22:21:15.885439  240293 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:21:15.885471  240293 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:21:15.885482  240293 cache.go:56] Caching tarball of preloaded images
	I0816 22:21:15.885556  240293 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:21:15.885647  240293 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:21:15.885664  240293 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:21:15.886141  240293 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:21:15.972930  240293 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:21:15.972958  240293 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:21:15.972971  240293 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:21:15.973015  240293 start.go:313] acquiring machines lock for embed-certs-20210816221913-6487: {Name:mkaa6840e29b8ce519208ca05a6868b89ed678ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:21:15.973147  240293 start.go:317] acquired machines lock for "embed-certs-20210816221913-6487" in 87.665µs
	I0816 22:21:15.973167  240293 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:21:15.973173  240293 fix.go:55] fixHost starting: 
	I0816 22:21:15.973391  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:21:16.011364  240293 fix.go:108] recreateIfNeeded on embed-certs-20210816221913-6487: state=Stopped err=<nil>
	W0816 22:21:16.011393  240293 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:21:12.498256  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.498339  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.511166  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.698333  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.698432  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.711462  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.898762  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.898833  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.912242  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.912260  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.912297  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.924044  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.924063  238595 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:21:12.924069  238595 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:21:12.924078  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:21:12.924115  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:12.947173  238595 cri.go:76] found id: ""
	I0816 22:21:12.947231  238595 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:21:12.955761  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:21:12.962021  238595 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 16 22:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Aug 16 22:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 16 22:19 /etc/kubernetes/scheduler.conf
	
	I0816 22:21:12.962069  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 22:21:12.968271  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 22:21:12.974263  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 22:21:12.980161  238595 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.980208  238595 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:21:12.985775  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 22:21:12.991569  238595 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.991613  238595 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:21:12.997245  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:13.003145  238595 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:13.003164  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.063198  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.524913  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.647844  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.728361  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.785589  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:21:13.785640  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:14.298633  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:14.798169  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.299035  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.799027  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:16.298175  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:16.798961  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:17.298777  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.557824  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:17.557988  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:16.013864  240293 out.go:177] * Restarting existing docker container for "embed-certs-20210816221913-6487" ...
	I0816 22:21:16.013932  240293 cli_runner.go:115] Run: docker start embed-certs-20210816221913-6487
	I0816 22:21:17.275157  240293 cli_runner.go:168] Completed: docker start embed-certs-20210816221913-6487: (1.261184914s)
	I0816 22:21:17.275241  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:21:17.315858  240293 kic.go:420] container "embed-certs-20210816221913-6487" state is running.
	I0816 22:21:17.316215  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:17.360466  240293 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:21:17.360650  240293 machine.go:88] provisioning docker machine ...
	I0816 22:21:17.360673  240293 ubuntu.go:169] provisioning hostname "embed-certs-20210816221913-6487"
	I0816 22:21:17.360721  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:17.406246  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:17.406446  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:17.406464  240293 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816221913-6487 && echo "embed-certs-20210816221913-6487" | sudo tee /etc/hostname
	I0816 22:21:17.406998  240293 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54360->127.0.0.1:32959: read: connection reset by peer
	I0816 22:21:16.061401  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:18.560309  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:20.561398  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:17.798143  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:18.298261  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:18.798948  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:19.298814  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:19.798123  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:20.298107  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:20.333342  238595 api_server.go:70] duration metric: took 6.547748926s to wait for apiserver process to appear ...
	I0816 22:21:20.333376  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:21:20.333390  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:20.333866  238595 api_server.go:255] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0816 22:21:20.834559  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
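	The healthz probes above retry at roughly half-second intervals until the restarted apiserver stops refusing connections. A compact sketch of that wait loop; the endpoint and cadence come from the log, while InsecureSkipVerify is only a simplification to keep the sketch self-contained (minikube's own check carries the cluster's certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8444/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			// Matches the ~500ms spacing between attempts in the log.
			time.Sleep(500 * time.Millisecond)
		}
	}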
	I0816 22:21:20.607653  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816221913-6487
	
	I0816 22:21:20.607740  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:20.659003  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:20.659183  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:20.659208  240293 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816221913-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816221913-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816221913-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:21:20.783095  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:21:20.783122  240293 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:21:20.783145  240293 ubuntu.go:177] setting up certificates
	I0816 22:21:20.783165  240293 provision.go:83] configureAuth start
	I0816 22:21:20.783220  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:20.830020  240293 provision.go:138] copyHostCerts
	I0816 22:21:20.830093  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:21:20.830106  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:21:20.830159  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:21:20.830261  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:21:20.830279  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:21:20.830300  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:21:20.830379  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:21:20.830389  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:21:20.830408  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:21:20.830465  240293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816221913-6487 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210816221913-6487]
	I0816 22:21:20.944596  240293 provision.go:172] copyRemoteCerts
	I0816 22:21:20.944660  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:21:20.944698  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:20.987627  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.078832  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:21:21.095178  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:21:21.110394  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:21:21.128860  240293 provision.go:86] duration metric: configureAuth took 345.684672ms
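	
	The SAN list in the "generating server cert" line above is what lands in server.pem. A quick way to confirm the embedded SANs by hand (standard openssl, not part of the test run; MINIKUBE_HOME stands in for the long Jenkins workspace path above):
	
	    openssl x509 -noout -text -in "$MINIKUBE_HOME/.minikube/machines/server.pem" \
	      | grep -A1 'Subject Alternative Name'
	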
	I0816 22:21:21.128885  240293 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:21:21.129071  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:21:21.129211  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.178067  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:21.178225  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:21.178249  240293 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:21:21.688631  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:21:21.688661  240293 machine.go:91] provisioned docker machine in 4.327996373s
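	
	A manual spot-check that the option file landed and CRI-O restarted cleanly would be (same SSH session assumed, file path per the tee command above):
	
	    cat /etc/sysconfig/crio.minikube  # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio          # expect: active
	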
	I0816 22:21:21.688673  240293 start.go:267] post-start starting for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:21:21.688686  240293 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:21:21.688733  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:21:21.688776  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.742815  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.831891  240293 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:21:21.834999  240293 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:21:21.835025  240293 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:21:21.835039  240293 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:21:21.835049  240293 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:21:21.835063  240293 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:21:21.835113  240293 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:21:21.835204  240293 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:21:21.835315  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:21:21.844143  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:21:21.861729  240293 start.go:270] post-start completed in 173.038547ms
	I0816 22:21:21.861791  240293 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:21:21.861839  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.904829  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.991978  240293 fix.go:57] fixHost completed within 6.018798061s
	I0816 22:21:21.992001  240293 start.go:80] releasing machines lock for "embed-certs-20210816221913-6487", held for 6.018842392s
	I0816 22:21:21.992085  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:22.036698  240293 ssh_runner.go:149] Run: systemctl --version
	I0816 22:21:22.036732  240293 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:21:22.036762  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:22.036793  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:22.090133  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:22.090549  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:22.209931  240293 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:21:22.220079  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:21:22.230101  240293 docker.go:153] disabling docker service ...
	I0816 22:21:22.230151  240293 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:21:22.240560  240293 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:21:22.249789  240293 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:21:22.317235  240293 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:21:22.384532  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:21:22.394190  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:21:22.406563  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:21:22.414255  240293 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:21:22.414278  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
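	
	The two sed edits above only touch two keys in /etc/crio/crio.conf; a grep shows the result (expected output per the commands above):
	
	    grep -E '(pause_image|cni_default_network) =' /etc/crio/crio.conf
	    # pause_image = "k8s.gcr.io/pause:3.4.1"
	    # cni_default_network = "kindnet"
	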
	I0816 22:21:22.421925  240293 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:21:22.428474  240293 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:21:22.428529  240293 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:21:22.435893  240293 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
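	
	The sysctl failure above only means the br_netfilter module was not loaded yet, which is why the log calls it "might be okay"; once the module is in, the key resolves and bridged pod traffic becomes visible to iptables. The equivalent manual sequence:
	
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of exiting 255
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # same effect as the sh -c above
	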
	I0816 22:21:22.441981  240293 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:21:22.516727  240293 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:21:22.526452  240293 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:21:22.526514  240293 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:21:22.529600  240293 start.go:413] Will wait 60s for crictl version
	I0816 22:21:22.529654  240293 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:21:22.563406  240293 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:21:22.563494  240293 ssh_runner.go:149] Run: crio --version
	I0816 22:21:22.625584  240293 ssh_runner.go:149] Run: crio --version
	I0816 22:21:20.058201  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:22.058236  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:22.691739  240293 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:21:22.691826  240293 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:21:22.738471  240293 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0816 22:21:22.742507  240293 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:21:22.751708  240293 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:21:22.751767  240293 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:21:22.780397  240293 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:21:22.780419  240293 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:21:22.780466  240293 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:21:22.802073  240293 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:21:22.802099  240293 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:21:22.802167  240293 ssh_runner.go:149] Run: crio config
	I0816 22:21:22.873852  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:22.873875  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:22.873884  240293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:21:22.873896  240293 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816221913-6487 NodeName:embed-certs-20210816221913-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:21:22.874056  240293 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "embed-certs-20210816221913-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
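	
	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; a plain grep spot-check of the values kubeadm will actually consume (not part of the run):
	
	    sudo grep -E 'kubernetesVersion|controlPlaneEndpoint|podSubnet|cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new
	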
	
	I0816 22:21:22.874158  240293 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=embed-certs-20210816221913-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:21:22.874214  240293 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:21:22.892215  240293 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:21:22.892279  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:21:22.898805  240293 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:21:22.910838  240293 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:21:22.922709  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0816 22:21:22.935781  240293 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:21:22.938475  240293 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:21:22.946612  240293 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487 for IP: 192.168.76.2
	I0816 22:21:22.946658  240293 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:21:22.946680  240293 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:21:22.946734  240293 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key
	I0816 22:21:22.946758  240293 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25
	I0816 22:21:22.946785  240293 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key
	I0816 22:21:22.946930  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:21:22.946980  240293 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:21:22.946995  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:21:22.947031  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:21:22.947069  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:21:22.947100  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:21:22.947152  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:21:22.948307  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:21:22.963707  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:21:22.978856  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:21:22.994139  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:21:23.010398  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:21:23.025797  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:21:23.042070  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:21:23.058339  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:21:23.073522  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:21:23.092171  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:21:23.112153  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:21:23.127612  240293 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:21:23.138860  240293 ssh_runner.go:149] Run: openssl version
	I0816 22:21:23.143303  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:21:23.149943  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.152702  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.152740  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.157086  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:21:23.162960  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:21:23.169440  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.172300  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.172333  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.176717  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:21:23.183108  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:21:23.189796  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.192572  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.192617  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.196953  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
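	
	The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash links: the hash printed by each openssl x509 -hash run is the link name minus its ".0" suffix, e.g.:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941   -> linked as /etc/ssl/certs/b5213941.0
	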
	I0816 22:21:23.202869  240293 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:23.202969  240293 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:21:23.203000  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:23.225669  240293 cri.go:76] found id: ""
	I0816 22:21:23.225727  240293 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:21:23.231865  240293 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:21:23.231889  240293 kubeadm.go:600] restartCluster start
	I0816 22:21:23.231953  240293 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:21:23.237613  240293 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.238927  240293 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816221913-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:21:23.239535  240293 kubeconfig.go:128] "embed-certs-20210816221913-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:21:23.240624  240293 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:21:23.243940  240293 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:21:23.249774  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.249811  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.261064  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
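	
	The probe repeated below is a single pgrep: -f matches against the full command line, -x requires that line to match the pattern exactly, and -n keeps only the newest match, so exit status 1 just means no kube-apiserver process exists yet:
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	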
	I0816 22:21:23.461439  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.461533  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.474779  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.662020  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.662099  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.675194  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.861451  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.861530  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.874438  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.061711  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.061771  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.074758  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.262079  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.262150  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.275524  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.461740  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.461804  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.474796  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.662097  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.662170  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.675520  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.861779  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.861849  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.874503  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.061773  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.061835  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.074999  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.261220  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.261320  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.274763  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.462071  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.462139  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.475399  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:22.563949  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:25.061018  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:24.749578  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:21:24.749607  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:21:24.834697  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:24.839645  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:24.839665  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:25.334041  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:25.338514  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:25.338534  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:25.834845  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:25.841942  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:25.841973  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:26.334549  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:26.339143  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:21:26.345794  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:21:26.345821  238595 api_server.go:129] duration metric: took 6.012437633s to wait for apiserver health ...
	I0816 22:21:26.345834  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:21:26.345842  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:26.348018  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:21:26.348067  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:21:26.351496  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:21:26.351513  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:21:26.364862  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:21:26.663402  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:21:26.673137  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:21:26.673169  238595 system_pods.go:61] "coredns-558bd4d5db-zv2bp" [3f9aaeed-d94b-4e8e-8ee1-8b4e7e2bad94] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:21:26.673175  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [1edca304-f707-4964-9308-e948e78bbe97] Running
	I0816 22:21:26.673180  238595 system_pods.go:61] "kindnet-dlmtk" [44f0eada-8ea2-426e-9c49-cabf7add8b7c] Running
	I0816 22:21:26.673190  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [18a403fd-f4c6-4bc4-bace-a7ee9d3397d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:21:26.673201  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [79dc21b4-76c3-4911-96e4-3296115b78e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:21:26.673211  238595 system_pods.go:61] "kube-proxy-zb9nn" [3acb9237-9d5e-44cc-8304-181f590ae0ef] Running
	I0816 22:21:26.673223  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [18bc905c-fe6d-4e42-b9d1-f09bcac3a454] Running
	I0816 22:21:26.673232  238595 system_pods.go:61] "metrics-server-7c784ccb57-z8svs" [5f3b5b23-5056-4e3f-bc57-ae88895f06ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:21:26.673239  238595 system_pods.go:61] "storage-provisioner" [a2136862-0a6e-4594-8f38-bed49ceca1af] Running
	I0816 22:21:26.673244  238595 system_pods.go:74] duration metric: took 9.823114ms to wait for pod list to return data ...
	I0816 22:21:26.673253  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:21:26.676339  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:21:26.676371  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:21:26.676381  238595 node_conditions.go:105] duration metric: took 3.124111ms to run NodePressure ...
	I0816 22:21:26.676397  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.128593  238595 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:21:27.132902  238595 kubeadm.go:746] kubelet initialised
	I0816 22:21:27.132928  238595 kubeadm.go:747] duration metric: took 4.310514ms waiting for restarted kubelet to initialise ...
	I0816 22:21:27.132936  238595 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:21:27.139524  238595 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace to be "Ready" ...
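	
	The readiness polling that follows is roughly what kubectl wait does; an equivalent manual check, assuming the kubeconfig context carries the profile name as is usual for minikube:
	
	    kubectl --context default-k8s-different-port-20210816221939-6487 -n kube-system \
	      wait --for=condition=Ready pod/coredns-558bd4d5db-zv2bp --timeout=4m
	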
	I0816 22:21:24.558923  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:27.058005  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:25.661879  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.661942  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.674926  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.861996  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.862079  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.876444  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.061650  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.061708  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.074752  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.262045  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.262107  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.274819  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.274838  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.274877  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.286135  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.286151  240293 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:21:26.286157  240293 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:21:26.286166  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:21:26.286204  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:26.332734  240293 cri.go:76] found id: ""
	I0816 22:21:26.332793  240293 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:21:26.342135  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:21:26.349389  240293 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 16 22:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:19 /etc/kubernetes/scheduler.conf
	
	I0816 22:21:26.349444  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:21:26.356234  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:21:26.363575  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:21:26.370139  240293 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.370188  240293 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:21:26.377375  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:21:26.385240  240293 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.385288  240293 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
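	
	The grep checks above decide which of the four kubeconfig-style files already point at the expected control-plane endpoint; exit status 1 means the string is absent, so that file is removed and left for kubeadm to regenerate. Condensed:
	
	    sudo grep -l 'https://control-plane.minikube.internal:8443' \
	      /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
	    # any file NOT listed is deleted and rewritten by the kubeadm phases below
	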
	I0816 22:21:26.391547  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:26.399103  240293 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:26.399125  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:26.470913  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.372191  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.532311  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.631997  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
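	
	The five Run lines above are individual kubeadm init phases executed in order against the same rendered config; condensed into one loop:
	
	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	      sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
	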
	I0816 22:21:27.703108  240293 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:21:27.703169  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:28.217409  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:28.717450  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:29.217644  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:29.717503  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:30.217609  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:27.563642  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:30.059977  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:29.154595  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:31.155291  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:29.557639  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:32.057361  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:30.717020  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:31.217129  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:31.716944  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:32.217589  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:32.717615  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:33.217826  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:33.717123  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:34.217056  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:34.236648  240293 api_server.go:70] duration metric: took 6.533533708s to wait for apiserver process to appear ...
	I0816 22:21:34.236675  240293 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:21:34.236687  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:34.237080  240293 api_server.go:255] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0816 22:21:34.737550  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:32.060908  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.061201  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:33.656544  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.655844  238595 pod_ready.go:92] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:34.655868  238595 pod_ready.go:81] duration metric: took 7.516321054s waiting for pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:34.655882  238595 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:36.665562  238595 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.058752  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:36.558116  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:38.423548  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:21:38.423576  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:21:38.737956  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:38.742611  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:38.742646  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:39.238185  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:39.243127  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:39.243156  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:39.737204  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:39.744129  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:21:39.750956  240293 api_server.go:139] control plane version: v1.21.3
	I0816 22:21:39.750980  240293 api_server.go:129] duration metric: took 5.51429844s to wait for apiserver health ...
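
The healthz wait above traces the apiserver's normal boot arc: connection refused while the static pod starts, 403 because the anonymous probe is rejected until the rbac/bootstrap-roles hook lands, 500 while post-start hooks are still failing, then 200 "ok". A sketch of such a probe loop, assuming an anonymous HTTPS GET with TLS verification skipped (an assumption of this sketch, not necessarily minikube's exact client setup):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for i := 0; i < 120; i++ {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never returned 200")
    }
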
	I0816 22:21:39.750991  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:39.751006  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:39.752807  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:21:39.752886  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:21:39.756576  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:21:39.756596  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:21:39.769956  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
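
Note the delivery pattern here: the CNI manifest is rendered in memory, streamed to /var/tmp/minikube/cni.yaml (the "scp memory" line), and applied with the kubectl binary matching the cluster version. A local, illustrative-only equivalent:

    package main

    import (
        "fmt"
        "io"
        "os"
        "os/exec"
    )

    func main() {
        // Read the rendered manifest (the log shows a 2428-byte kindnet manifest).
        manifest, err := io.ReadAll(os.Stdin)
        if err != nil {
            panic(err)
        }
        const path = "/var/tmp/minikube/cni.yaml"
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            panic(err)
        }
        // Apply with the version-matched kubectl, as in the log.
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.21.3/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }
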
	I0816 22:21:40.241874  240293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:21:40.254628  240293 system_pods.go:59] 9 kube-system pods found
	I0816 22:21:40.254668  240293 system_pods.go:61] "coredns-558bd4d5db-s5rfs" [7146b870-68ad-407d-b3b2-bb620597d79a] Running
	I0816 22:21:40.254677  240293 system_pods.go:61] "etcd-embed-certs-20210816221913-6487" [94512d56-b2c2-427f-8b2d-c21aacd20a0e] Running
	I0816 22:21:40.254682  240293 system_pods.go:61] "kindnet-jx4gt" [569ce0ea-ff92-4730-a357-d37774ec5a9d] Running
	I0816 22:21:40.254688  240293 system_pods.go:61] "kube-apiserver-embed-certs-20210816221913-6487" [1d8f5d6e-36f4-4898-bcb8-3f2eba68010b] Running
	I0816 22:21:40.254700  240293 system_pods.go:61] "kube-controller-manager-embed-certs-20210816221913-6487" [ef9f2613-17c4-4bdb-b74b-299fc20cf91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:21:40.254707  240293 system_pods.go:61] "kube-proxy-ldxgj" [78bb138b-ce0a-41f2-a0c6-a4000e1146e2] Running
	I0816 22:21:40.254718  240293 system_pods.go:61] "kube-scheduler-embed-certs-20210816221913-6487" [e14e1b22-e7f9-4a56-9585-b24426755bdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:21:40.254727  240293 system_pods.go:61] "metrics-server-7c784ccb57-pqdqv" [07fd0b54-2e74-462c-a333-e4bb7cdc6570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:21:40.254733  240293 system_pods.go:61] "storage-provisioner" [f7dea14f-5a94-4bc2-9b3f-0f51d33cd218] Running
	I0816 22:21:40.254741  240293 system_pods.go:74] duration metric: took 12.847497ms to wait for pod list to return data ...
	I0816 22:21:40.254761  240293 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:21:40.258773  240293 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:21:40.258801  240293 node_conditions.go:123] node cpu capacity is 8
	I0816 22:21:40.258816  240293 node_conditions.go:105] duration metric: took 4.048388ms to run NodePressure ...
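
The NodePressure verification reads node capacity and conditions straight from the API. A hedged client-go sketch of the same kind of check (kubeconfig path taken from the log; everything else is illustrative, not minikube's node_conditions.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                // MemoryPressure/DiskPressure should be False on a healthy node.
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %s under pressure: %s\n", n.Name, c.Type)
                }
            }
        }
    }
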
	I0816 22:21:40.258833  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:36.561262  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:39.060084  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:38.666063  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.666091  238595 pod_ready.go:81] duration metric: took 4.010195402s waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.666109  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.670127  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.670145  238595 pod_ready.go:81] duration metric: took 4.027483ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.670156  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.674043  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.674062  238595 pod_ready.go:81] duration metric: took 3.899459ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.674075  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zb9nn" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.677789  238595 pod_ready.go:92] pod "kube-proxy-zb9nn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.677807  238595 pod_ready.go:81] duration metric: took 3.723435ms waiting for pod "kube-proxy-zb9nn" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.677818  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.681155  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.681177  238595 pod_ready.go:81] duration metric: took 3.350092ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.681187  238595 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:41.069361  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:39.058024  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:41.558288  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:40.822751  240293 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:21:40.826839  240293 kubeadm.go:746] kubelet initialised
	I0816 22:21:40.826858  240293 kubeadm.go:747] duration metric: took 4.079302ms waiting for restarted kubelet to initialise ...
	I0816 22:21:40.826865  240293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:21:40.832096  240293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace to be "Ready" ...
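
Each pod_ready line from here on is one tick of a per-pod Ready-condition poll. A compact illustration of that loop (the file names in the log place minikube's real version in kverify's pod_ready.go; the 2s interval and function shape here are assumptions of this sketch):

    package kverify

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // the pod_ready.go:92 "Ready":"True" case above
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
    }
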
	I0816 22:21:42.846724  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.847780  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:41.560606  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.060244  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:43.070196  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:45.569472  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.057149  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:46.057919  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:48.557352  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:47.346793  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:49.347215  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:46.560497  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:49.060150  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:47.570048  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:50.070010  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:52.070043  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.057621  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.557774  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.348115  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.417886  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.060747  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.561154  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:54.570252  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:57.070456  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:55.557834  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:58.057474  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:55.846812  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:57.849366  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.348217  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:56.060197  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:58.560545  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.561053  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:59.569461  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:01.569894  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.556730  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:02.558818  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:02.846906  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:04.847312  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:03.059324  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:05.060842  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:03.570105  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:06.069210  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:05.057740  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.557500  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.349324  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:09.848075  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.060941  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:09.061382  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:08.069720  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:10.069872  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.069927  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:10.057865  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.557313  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.346961  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.347554  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:11.560761  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.060556  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.570237  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:17.070162  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.557390  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.557840  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:18.557906  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.347642  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:18.847528  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.560811  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:19.060192  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:19.070665  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.569617  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.057593  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:23.057769  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.350304  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:22.349597  240293 pod_ready.go:92] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.349624  240293 pod_ready.go:81] duration metric: took 41.517503384s waiting for pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.349635  240293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.353381  240293 pod_ready.go:92] pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.353398  240293 pod_ready.go:81] duration metric: took 3.753707ms waiting for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.353416  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.357098  240293 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.357119  240293 pod_ready.go:81] duration metric: took 3.696004ms waiting for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.357128  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.360781  240293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.360801  240293 pod_ready.go:81] duration metric: took 3.66616ms waiting for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.360813  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldxgj" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.364443  240293 pod_ready.go:92] pod "kube-proxy-ldxgj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.364458  240293 pod_ready.go:81] duration metric: took 3.637945ms waiting for pod "kube-proxy-ldxgj" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.364465  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.746026  240293 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.746046  240293 pod_ready.go:81] duration metric: took 381.574921ms waiting for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.746057  240293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:25.151863  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.060611  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:23.560974  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:24.070906  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:26.570138  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:25.557675  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:27.557835  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:27.650260  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.651969  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:26.059835  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:28.060143  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:30.559808  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.068839  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:31.069013  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.557981  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.057453  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.150995  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.650482  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.560209  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.560989  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:33.073518  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:35.570047  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.057526  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:36.556985  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:38.557172  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:37.151419  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:39.152016  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:37.060468  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:39.060676  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:38.069959  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:40.070255  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:41.057437  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:42.552389  213866 pod_ready.go:81] duration metric: took 4m0.400173208s waiting for pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace to be "Ready" ...
	E0816 22:22:42.552422  213866 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:22:42.552443  213866 pod_ready.go:38] duration metric: took 4m3.999681718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:22:42.552468  213866 kubeadm.go:604] restartCluster took 4m52.206606075s
	W0816 22:22:42.552592  213866 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:22:42.552618  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
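
When the extra readiness wait times out, minikube gives up on restarting the existing cluster and falls back to a wipe-and-reinit, which is what the kubeadm reset above begins. A hypothetical condensation of that decision (waitExtra and run are stand-ins for minikube's real pod_ready.go and ssh_runner.go helpers; only the command string is verbatim):

    // Fragment only: helper names are stand-ins, not minikube's API.
    if err := waitExtra(4 * time.Minute); err != nil {
        log.Printf("! Unable to restart cluster, will reset it: %v", err)
        run("sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset " +
            "--cri-socket /var/run/crio/crio.sock --force")
    }
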
	I0816 22:22:41.650639  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:44.151997  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:41.560954  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:44.060209  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:42.570273  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:45.070127  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:47.070330  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:46.650980  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:49.150679  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:46.560003  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:48.560502  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:49.570288  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:52.069874  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:51.151426  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:53.651600  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:51.060391  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:53.559430  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:55.556159  218005 pod_ready.go:81] duration metric: took 4m0.006201053s waiting for pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace to be "Ready" ...
	E0816 22:22:55.556182  218005 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:22:55.556209  218005 pod_ready.go:38] duration metric: took 4m11.600806981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:22:55.556239  218005 kubeadm.go:604] restartCluster took 4m27.564944846s
	W0816 22:22:55.556359  218005 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:22:55.556394  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:22:54.569550  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:56.570201  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:59.549186  213866 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.996549042s)
	I0816 22:22:59.549246  213866 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:22:59.558782  213866 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:22:59.558841  213866 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:22:59.582632  213866 cri.go:76] found id: ""
	I0816 22:22:59.582680  213866 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:22:59.589250  213866 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:22:59.589313  213866 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:22:59.595410  213866 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:22:59.595449  213866 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
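
The stale-config check just above failed because none of the four kubeconfig files exist after the reset, so minikube proceeds straight to a fresh kubeadm init. For readability, the --ignore-preflight-errors list from that single long command, split out:

    // Values copied verbatim from the logged kubeadm init command.
    var ignoredPreflight = []string{
        "DirAvailable--etc-kubernetes-manifests",
        "DirAvailable--var-lib-minikube",
        "DirAvailable--var-lib-minikube-etcd",
        "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
        "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
        "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
        "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
        "Port-10250",
        "Swap",
        "SystemVerification",
        "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    }
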
	I0816 22:22:56.151598  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:58.151860  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:00.152384  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:59.903823  213866 out.go:204]   - Generating certificates and keys ...
	I0816 22:22:58.570653  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:01.070217  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:00.931087  213866 out.go:204]   - Booting up control plane ...
	I0816 22:23:02.153047  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:04.651269  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:03.070450  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:05.569721  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:07.151505  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:09.652219  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:10.973536  213866 out.go:204]   - Configuring RBAC rules ...
	I0816 22:23:11.388904  213866 cni.go:93] Creating CNI manager for ""
	I0816 22:23:11.388927  213866 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:23:07.570517  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:10.070537  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:11.390636  213866 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:23:11.390703  213866 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:23:11.394246  213866 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0816 22:23:11.394264  213866 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:23:11.406315  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:23:11.608487  213866 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:23:11.608574  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:11.608583  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816221528-6487 minikube.k8s.io/updated_at=2021_08_16T22_23_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:11.713735  213866 ops.go:34] apiserver oom_adj: 16
	I0816 22:23:11.713761  213866 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:23:11.713784  213866 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
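
The two ops.go lines show why this write happens: the apiserver comes up at oom_adj 16, an attractive OOM-kill target, and minikube lowers it to -10. A minimal sketch of the same adjustment, assuming the process runs as root instead of the log's echo-through-sudo-tee pipeline:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(out)))
        cur, _ := os.ReadFile(path)
        fmt.Printf("apiserver oom_adj: %s", cur)
        // -10 makes the kernel's OOM killer far less likely to pick the apiserver.
        if err := os.WriteFile(path, []byte("-10\n"), 0o644); err != nil {
            panic(err)
        }
    }
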
	I0816 22:23:11.713864  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.275854  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.776002  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:13.276134  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.151756  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:14.650904  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:12.570276  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:14.570427  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:17.072069  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:13.775987  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:14.276358  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:14.775456  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:15.276056  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:15.775401  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:16.276191  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:16.775833  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.276185  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.775937  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:18.276372  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.151526  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:19.651003  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:19.570330  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:22.069741  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:18.775755  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:19.276305  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:19.775502  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:20.276246  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:20.775975  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:21.275393  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:21.776071  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.276202  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.776321  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:23.275752  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.151401  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:24.152069  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:24.069903  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:26.071687  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:23.775524  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:24.276016  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:24.775772  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:25.276335  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:25.775974  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:26.275589  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:26.775735  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:27.275457  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:27.341729  213866 kubeadm.go:985] duration metric: took 15.733205702s to wait for elevateKubeSystemPrivileges.
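
The burst of `kubectl get sa default` runs above is elevateKubeSystemPrivileges waiting for the controller-manager to create the default ServiceAccount before the minikube-rbac clusterrolebinding (created at 22:23:11) can take effect. A rough equivalent of that poll (command verbatim; loop shape and timeout assumed):

    package bsutil

    import (
        "errors"
        "os/exec"
        "time"
    )

    func waitDefaultSA() error {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.14.0/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                return nil // the ServiceAccount exists; privileges can be elevated
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("default service account never appeared")
    }
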
	I0816 22:23:27.341763  213866 kubeadm.go:392] StartCluster complete in 5m37.024207841s
	I0816 22:23:27.341785  213866 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:23:27.341894  213866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:23:27.343738  213866 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:23:27.861292  213866 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816221528-6487" rescaled to 1
	I0816 22:23:27.861357  213866 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:23:27.863262  213866 out.go:177] * Verifying Kubernetes components...
	I0816 22:23:27.863325  213866 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:23:27.861411  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:23:27.861427  213866 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:23:27.861565  213866 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:23:27.863424  213866 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863441  213866 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863445  213866 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.863454  213866 addons.go:147] addon dashboard should already be in state true
	I0816 22:23:27.863466  213866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863417  213866 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863485  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.863497  213866 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863429  213866 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863532  213866 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.863541  213866 addons.go:147] addon metrics-server should already be in state true
	I0816 22:23:27.863566  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	W0816 22:23:27.863509  213866 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:23:27.863643  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.863788  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864032  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864037  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864252  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.927695  213866 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:23:27.927754  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:23:27.927763  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:23:27.927817  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.929360  213866 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:23:27.930891  213866 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:23:27.930964  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:23:27.930976  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:23:27.931027  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.935860  213866 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.935883  213866 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:23:27.935953  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.936440  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.938570  213866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:23:27.938670  213866 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:23:27.938685  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:23:27.938730  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.964347  213866 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:23:27.964621  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:23:27.984062  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:27.991995  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:28.006061  213866 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:23:28.006083  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:23:28.006142  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:28.024841  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:28.060997  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
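
Each sshutil line above connects to 127.0.0.1 on the host port that Docker mapped to the node container's sshd (22/tcp), resolved via the docker container inspect template shown in the preceding cli_runner lines. A minimal Go sketch of that lookup, reusing the template from the log (the helper name and usage are illustrative; the single quotes seen in the logged template are shell-level artifacts and are omitted here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshPort resolves the host port Docker mapped to the node container's
    // sshd (22/tcp), using the same Go template visible in the log.
    func sshPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("old-k8s-version-20210816221528-6487")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh client target: 127.0.0.1:" + port) // e.g. 32929
    }
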
	I0816 22:23:28.128450  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:23:28.128477  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:23:28.128606  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:23:28.128624  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:23:28.148405  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:23:28.148429  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:23:28.212613  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:23:28.212639  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:23:28.227353  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:23:28.227378  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:23:28.229886  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:23:28.231855  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:23:28.231872  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:23:28.241746  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:23:28.241765  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:23:28.246032  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:23:28.321947  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:23:28.321974  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:23:28.332441  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:23:28.416715  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:23:28.416743  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:23:28.418070  213866 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
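
The host record injection confirmed above comes from the bash pipeline logged at 22:23:27.964621: minikube reads the coredns ConfigMap, uses sed to splice a hosts{} stanza in front of the existing forward directive, and replaces the ConfigMap. A hedged Go sketch of assembling that pipeline (the kubectl path and gateway IP are taken from the log; the helper itself is illustrative, not minikube's code):

    package main

    import "fmt"

    // buildCoreDNSPatch returns a bash pipeline that splices a hosts{} stanza
    // into the CoreDNS Corefile so pods can resolve host.minikube.internal to
    // the host-side gateway IP.
    func buildCoreDNSPatch(kubectl, gatewayIP string) string {
    	// The sed expression inserts the block immediately before the
    	// "forward . /etc/resolv.conf" line; the \n escapes are literal
    	// and expanded by sed's insert command.
    	hosts := fmt.Sprintf(`\        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }`, gatewayIP)
    	return fmt.Sprintf(
    		"sudo %[1]s --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"+
    			" | sed '/^        forward . \\/etc\\/resolv.conf.*/i %[2]s'"+
    			" | sudo %[1]s --kubeconfig=/var/lib/minikube/kubeconfig replace -f -",
    		kubectl, hosts)
    }

    func main() {
    	fmt.Println(buildCoreDNSPatch("/var/lib/minikube/binaries/v1.14.0/kubectl", "192.168.58.1"))
    }
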
	I0816 22:23:28.435330  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:23:28.435355  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:23:28.530604  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:23:28.530683  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:23:28.624980  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:23:28.625011  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:23:28.721013  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:23:29.237769  213866 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816221528-6487"
	I0816 22:23:26.650225  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:28.651097  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:28.569915  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:31.069733  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:29.633186  213866 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:23:29.633214  213866 addons.go:344] enableAddons completed in 1.771802579s
	I0816 22:23:29.970660  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:32.470671  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:31.150575  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:33.650949  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:36.015986  218005 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (40.459570687s)
	I0816 22:23:36.016042  218005 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:23:36.025388  218005 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:23:36.025449  218005 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:23:36.048066  218005 cri.go:76] found id: ""
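
The empty found id: "" result above confirms that kubeadm reset left no kube-system containers behind before the fresh init. A small Go sketch of the same CRI listing (the crictl flags match the logged invocation; the wrapper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Lists container IDs, one per line, filtered to pods labeled with the
    	// kube-system namespace. An empty result (as in the log) means no
    	// stale containers survived the reset.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
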
	I0816 22:23:36.048116  218005 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:23:36.054724  218005 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:23:36.054776  218005 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:23:36.060924  218005 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:23:36.060960  218005 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:23:36.321958  218005 out.go:204]   - Generating certificates and keys ...
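
The kubeadm init invocation above disables a fixed set of preflight checks: manifest files and data directories persist across container restarts under the docker driver, and SystemVerification is skipped as noted at kubeadm.go:220. A hedged Go sketch of assembling such an invocation (the ignore list is abbreviated from the full one in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// A few of the preflight checks skipped in the log above; see the
    	// logged command for the complete list.
    	ignores := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
    		"Port-10250", "Swap", "Mem", "SystemVerification",
    	}
    	cmd := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors="+strings.Join(ignores, ","))
    	fmt.Println(cmd.String()) // print the assembled command line
    }
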
	I0816 22:23:33.070544  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:35.569576  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:34.970715  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:37.470636  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:36.152317  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:38.651287  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:37.198357  218005 out.go:204]   - Booting up control plane ...
	I0816 22:23:37.569853  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:40.069260  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:42.070108  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:39.970854  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:42.470440  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:41.152273  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:43.152413  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:44.070784  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:46.570468  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:44.471099  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:46.970981  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:45.650685  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:47.651865  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.151571  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.251329  218005 out.go:204]   - Configuring RBAC rules ...
	I0816 22:23:50.662704  218005 cni.go:93] Creating CNI manager for ""
	I0816 22:23:50.662740  218005 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:23:48.570517  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:51.069641  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:49.470857  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:51.970879  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:52.650968  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.151574  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.664647  218005 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:23:50.664709  218005 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:23:50.668434  218005 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:23:50.668452  218005 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:23:50.680365  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
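
The CNI choice logged at 22:23:50.662740 follows from the driver/runtime pair: with the docker driver and a non-docker runtime such as crio, pod networking needs a real CNI, and kindnet is the recommendation. An illustrative reduction of that decision (not minikube's actual code; the fallback branch is an assumption for the sketch):

    package main

    import "fmt"

    // chooseCNI reduces the logged decision to its two inputs.
    func chooseCNI(driver, containerRuntime string) string {
    	if driver == "docker" && containerRuntime != "docker" {
    		return "kindnet"
    	}
    	return "" // empty: leave CNI selection to defaults (assumed fallback)
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
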
	I0816 22:23:50.828044  218005 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:23:50.828111  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:50.828111  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816221555-6487 minikube.k8s.io/updated_at=2021_08_16T22_23_50_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:50.929668  218005 ops.go:34] apiserver oom_adj: -16
	I0816 22:23:50.929747  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:51.484102  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:51.984386  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:52.483881  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:52.983543  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.484311  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.983847  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:54.484282  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:54.983636  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:55.484558  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.070159  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.606653  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:54.470892  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:56.970780  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:57.650928  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:00.151897  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.984526  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:56.484026  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:56.984449  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:57.483595  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:57.984552  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.483696  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.984488  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:59.484178  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:59.984161  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:00.483560  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.069766  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:00.569411  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:59.470437  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:01.970230  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:00.984372  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:01.483821  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:01.983750  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:02.483553  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:02.984248  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:03.154554  218005 kubeadm.go:985] duration metric: took 12.326511298s to wait for elevateKubeSystemPrivileges.
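
The 12.3s elevateKubeSystemPrivileges wait above is the get sa default loop repeated roughly every 500ms until the default service account exists, at which point the minikube-rbac clusterrolebinding created earlier can take effect. A minimal Go sketch of such a poll loop (kubectl and kubeconfig paths are taken from the log; the helper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the default
    // service account exists or the deadline passes, mirroring the ~500ms
    // retry cadence visible in the log.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl", 2*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    	}
    }
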
	I0816 22:24:03.154579  218005 kubeadm.go:392] StartCluster complete in 5m35.191699125s
	I0816 22:24:03.154598  218005 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:24:03.154685  218005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:03.156374  218005 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:24:03.671880  218005 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816221555-6487" rescaled to 1
	I0816 22:24:03.671985  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:24:03.671991  218005 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:24:03.673896  218005 out.go:177] * Verifying Kubernetes components...
	I0816 22:24:03.672162  218005 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:24:03.673971  218005 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:24:03.674016  218005 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674036  218005 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674041  218005 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:24:03.674048  218005 addons.go:59] Setting dashboard=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674062  218005 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674078  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674083  218005 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674090  218005 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816221555-6487"
	I0816 22:24:03.674101  218005 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674112  218005 addons.go:147] addon metrics-server should already be in state true
	I0816 22:24:03.674135  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674068  218005 addons.go:135] Setting addon dashboard=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674173  218005 addons.go:147] addon dashboard should already be in state true
	I0816 22:24:03.674208  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674382  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.672248  218005 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:24:03.674622  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.674690  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.674623  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.736692  218005 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:24:03.736813  218005 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:24:03.736823  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:24:03.736874  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.744282  218005 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.744310  218005 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:24:03.744350  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.744883  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.748946  218005 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:24:03.750352  218005 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:24:03.750418  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:24:03.750430  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:24:03.750478  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:02.650397  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:04.651301  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:03.751840  218005 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:24:03.751945  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:24:03.751957  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:24:03.752009  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.755193  218005 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816221555-6487" to be "Ready" ...
	I0816 22:24:03.755358  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:24:03.761862  218005 node_ready.go:49] node "no-preload-20210816221555-6487" has status "Ready":"True"
	I0816 22:24:03.761880  218005 node_ready.go:38] duration metric: took 6.660997ms waiting for node "no-preload-20210816221555-6487" to be "Ready" ...
	I0816 22:24:03.761892  218005 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:03.768294  218005 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace to be "Ready" ...
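
The pod_ready waits above and below watch the Pod's Ready condition. A hedged client-go sketch of the same check (assumes the k8s.io/client-go module; the pod name and kubeconfig path are taken from the log, and the polling loop is illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the Pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(
    			context.Background(), "coredns-78fcd69978-pq6qg", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // retry until Ready (or pod replaced)
    	}
    }
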
	I0816 22:24:03.801770  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.802512  218005 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:24:03.802535  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:24:03.802590  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.809212  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.812103  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.857686  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:04.027691  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:24:04.027719  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:24:04.029430  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:24:04.029747  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:24:04.029767  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:24:04.043293  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:24:04.043322  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:24:04.112589  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:24:04.112614  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:24:04.136322  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:24:04.136350  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:24:04.221098  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:24:04.221124  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:24:04.225575  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:24:04.234235  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:24:04.236543  218005 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0816 22:24:04.240565  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:24:04.240591  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:24:04.332986  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:24:04.333013  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:24:04.512532  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:24:04.512619  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:24:04.614597  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:24:04.614623  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:24:04.639772  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:24:04.639802  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:24:04.732668  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:24:04.732700  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:24:04.830142  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:24:05.440517  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.214895018s)
	I0816 22:24:05.440580  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.20631443s)
	I0816 22:24:05.440620  218005 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816221555-6487"
	I0816 22:24:05.914058  218005 pod_ready.go:102] pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.519460  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.68926584s)
	I0816 22:24:02.571679  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:05.070253  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:04.470734  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:06.470971  213866 node_ready.go:49] node "old-k8s-version-20210816221528-6487" has status "Ready":"True"
	I0816 22:24:06.471000  213866 node_ready.go:38] duration metric: took 38.506620086s waiting for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:24:06.471013  213866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:06.474131  213866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:08.523994  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.651526  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:08.651579  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.521319  218005 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0816 22:24:06.521345  218005 addons.go:344] enableAddons completed in 2.849194274s
	I0816 22:24:08.280217  218005 pod_ready.go:102] pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:09.277642  218005 pod_ready.go:97] error getting pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-pq6qg" not found
	I0816 22:24:09.277677  218005 pod_ready.go:81] duration metric: took 5.509353036s waiting for pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace to be "Ready" ...
	E0816 22:24:09.277690  218005 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-pq6qg" not found
	I0816 22:24:09.277699  218005 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:09.824157  218005 pod_ready.go:92] pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:09.824190  218005 pod_ready.go:81] duration metric: took 546.47538ms waiting for pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:09.824204  218005 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.838486  218005 pod_ready.go:92] pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.838510  218005 pod_ready.go:81] duration metric: took 1.014297972s waiting for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.838528  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.844445  218005 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.844470  218005 pod_ready.go:81] duration metric: took 5.932696ms waiting for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.844485  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.849958  218005 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.849978  218005 pod_ready.go:81] duration metric: took 5.485285ms waiting for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.849991  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-82g44" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.859397  218005 pod_ready.go:92] pod "kube-proxy-82g44" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.859417  218005 pod_ready.go:81] duration metric: took 9.418559ms waiting for pod "kube-proxy-82g44" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.859429  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:11.079662  218005 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:11.079685  218005 pod_ready.go:81] duration metric: took 220.246797ms waiting for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:11.079695  218005 pod_ready.go:38] duration metric: took 7.317786525s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:11.079716  218005 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:24:11.079760  218005 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:11.138671  218005 api_server.go:70] duration metric: took 7.466643672s to wait for apiserver process to appear ...
	I0816 22:24:11.138701  218005 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:11.138714  218005 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:24:11.144121  218005 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:24:11.145010  218005 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:24:11.145031  218005 api_server.go:129] duration metric: took 6.323339ms to wait for apiserver health ...
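
The healthz probe above is a plain HTTPS GET against the apiserver expecting the body ok. A minimal Go sketch (skipping TLS verification is an illustrative shortcut; a real client would trust the cluster CA from the kubeconfig):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Endpoint taken from the log; InsecureSkipVerify is for the sketch only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.67.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
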
	I0816 22:24:11.145040  218005 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:24:11.281080  218005 system_pods.go:59] 9 kube-system pods found
	I0816 22:24:11.281116  218005 system_pods.go:61] "coredns-78fcd69978-zmc4x" [1fc66fbb-952d-43b5-af77-f7551a8ed70e] Running
	I0816 22:24:11.281123  218005 system_pods.go:61] "etcd-no-preload-20210816221555-6487" [43927863-2c25-418d-a2a0-af7a6c1c475d] Running
	I0816 22:24:11.281130  218005 system_pods.go:61] "kindnet-pz7lz" [0e675e1e-c1e4-4ed5-b148-1b64d0933e1d] Running
	I0816 22:24:11.281136  218005 system_pods.go:61] "kube-apiserver-no-preload-20210816221555-6487" [1092f52d-df2b-42a1-850b-93c80e4f8146] Running
	I0816 22:24:11.281142  218005 system_pods.go:61] "kube-controller-manager-no-preload-20210816221555-6487" [0f84e622-668e-4d39-a6f9-d165fe87089e] Running
	I0816 22:24:11.281147  218005 system_pods.go:61] "kube-proxy-82g44" [80dd61db-1545-4dc7-bd88-00ae47943849] Running
	I0816 22:24:11.281154  218005 system_pods.go:61] "kube-scheduler-no-preload-20210816221555-6487" [8a491c0e-1fdf-4a83-a89f-5d5497f54377] Running
	I0816 22:24:11.281166  218005 system_pods.go:61] "metrics-server-7c784ccb57-b466w" [4161efc3-7c01-456e-b9d5-6c09ca70c1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:24:11.281177  218005 system_pods.go:61] "storage-provisioner" [40ef7855-e8e1-4106-9694-5bee902ec410] Running
	I0816 22:24:11.281186  218005 system_pods.go:74] duration metric: took 136.138888ms to wait for pod list to return data ...
	I0816 22:24:11.281198  218005 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:24:11.479313  218005 default_sa.go:45] found service account: "default"
	I0816 22:24:11.479340  218005 default_sa.go:55] duration metric: took 198.135098ms for default service account to be created ...
	I0816 22:24:11.479350  218005 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:24:11.682753  218005 system_pods.go:86] 9 kube-system pods found
	I0816 22:24:11.682787  218005 system_pods.go:89] "coredns-78fcd69978-zmc4x" [1fc66fbb-952d-43b5-af77-f7551a8ed70e] Running
	I0816 22:24:11.682795  218005 system_pods.go:89] "etcd-no-preload-20210816221555-6487" [43927863-2c25-418d-a2a0-af7a6c1c475d] Running
	I0816 22:24:11.682813  218005 system_pods.go:89] "kindnet-pz7lz" [0e675e1e-c1e4-4ed5-b148-1b64d0933e1d] Running
	I0816 22:24:11.682821  218005 system_pods.go:89] "kube-apiserver-no-preload-20210816221555-6487" [1092f52d-df2b-42a1-850b-93c80e4f8146] Running
	I0816 22:24:11.682829  218005 system_pods.go:89] "kube-controller-manager-no-preload-20210816221555-6487" [0f84e622-668e-4d39-a6f9-d165fe87089e] Running
	I0816 22:24:11.682835  218005 system_pods.go:89] "kube-proxy-82g44" [80dd61db-1545-4dc7-bd88-00ae47943849] Running
	I0816 22:24:11.682842  218005 system_pods.go:89] "kube-scheduler-no-preload-20210816221555-6487" [8a491c0e-1fdf-4a83-a89f-5d5497f54377] Running
	I0816 22:24:11.682860  218005 system_pods.go:89] "metrics-server-7c784ccb57-b466w" [4161efc3-7c01-456e-b9d5-6c09ca70c1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:24:11.682873  218005 system_pods.go:89] "storage-provisioner" [40ef7855-e8e1-4106-9694-5bee902ec410] Running
	I0816 22:24:11.682882  218005 system_pods.go:126] duration metric: took 203.52527ms to wait for k8s-apps to be running ...
	I0816 22:24:11.682895  218005 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:24:11.682942  218005 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:24:11.720433  218005 system_svc.go:56] duration metric: took 37.530414ms WaitForService to wait for kubelet.
	I0816 22:24:11.720464  218005 kubeadm.go:547] duration metric: took 8.048442729s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:24:11.720494  218005 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:24:11.878128  218005 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:24:11.878155  218005 node_conditions.go:123] node cpu capacity is 8
	I0816 22:24:11.878169  218005 node_conditions.go:105] duration metric: took 157.669186ms to run NodePressure ...
	I0816 22:24:11.878180  218005 start.go:231] waiting for startup goroutines ...
	I0816 22:24:11.931309  218005 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:24:11.933325  218005 out.go:177] 
	W0816 22:24:11.933509  218005 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:24:11.934987  218005 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:24:11.936447  218005 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816221555-6487" cluster and "default" namespace by default
	I0816 22:24:07.570467  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:09.570953  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:11.577640  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:10.981764  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:13.480977  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:11.153078  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:13.651464  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:14.071289  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:16.570307  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:15.980670  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:17.980751  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:16.151684  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:18.651272  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:19.070575  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:21.569636  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:19.981831  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:22.481224  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:23.480538  213866 pod_ready.go:92] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.480578  213866 pod_ready.go:81] duration metric: took 17.00642622s waiting for pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.480592  213866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.484151  213866 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.484166  213866 pod_ready.go:81] duration metric: took 3.564442ms waiting for pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.484176  213866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w5rw" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.487349  213866 pod_ready.go:92] pod "kube-proxy-9w5rw" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.487362  213866 pod_ready.go:81] duration metric: took 3.179929ms waiting for pod "kube-proxy-9w5rw" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.487370  213866 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:21.151011  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:23.151110  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:25.151266  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:24.069317  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:26.069716  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:25.495411  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:27.495624  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:18:22 UTC, end at Mon 2021-08-16 22:24:29 UTC. --
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.917046259Z" level=info msg="Starting container: 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a" id=b180841a-def6-4f50-8195-77c2912b1592 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.925119819Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\""
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.944251948Z" level=info msg="Started container 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=b180841a-def6-4f50-8195-77c2912b1592 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.775404968Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=9dcf1e44-3402-4a2e-9b54-99ce62ef81b4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.776867007Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9dcf1e44-3402-4a2e-9b54-99ce62ef81b4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.777419165Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=4a6eaa66-a3d2-4d6f-aab9-bce2aaeacb38 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.779215665Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a6eaa66-a3d2-4d6f-aab9-bce2aaeacb38 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.780008025Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=959da50f-9b05-4360-8567-bbc69e0d4780 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.045478122Z" level=info msg="Created container dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=959da50f-9b05-4360-8567-bbc69e0d4780 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.046025171Z" level=info msg="Starting container: dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310" id=481ef33f-e430-4e81-949f-ff4c7fac0f00 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.070205204Z" level=info msg="Started container dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=481ef33f-e430-4e81-949f-ff4c7fac0f00 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.779075090Z" level=info msg="Removing container: 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a" id=34e063b5-ea66-4729-a886-06cb979698fa name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.817302142Z" level=info msg="Removed container 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=34e063b5-ea66-4729-a886-06cb979698fa name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.124090294Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f" id=63807842-46e2-4e08-be66-ead0fe3759c4 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.124813846Z" level=info msg="Checking image status: kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6" id=bc4fd296-e7a8-419d-8c61-2ce1cf80b966 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.125548140Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,RepoTags:[docker.io/kubernetesui/dashboard:v2.1.0],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 docker.io/kubernetesui/dashboard@sha256:8cd877c1c0909bdd50043edc18b89cfbbf0614a57893ebf59b6bd1ddb5419323],Size_:228529574,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=bc4fd296-e7a8-419d-8c61-2ce1cf80b966 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.126354416Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=85543814-c253-4c18-860c-2bec804632b3 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.138353366Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/95aa46e0eac4db6daa4228516237d20080904e346393968b05f26fdd79d26dd8/merged/etc/group: no such file or directory"
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.280472912Z" level=info msg="Created container 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=85543814-c253-4c18-860c-2bec804632b3 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.280949622Z" level=info msg="Starting container: 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c" id=97b547a4-b4bc-44a7-8dc4-5d704a05015d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.290668631Z" level=info msg="Started container 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=97b547a4-b4bc-44a7-8dc4-5d704a05015d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638042079Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=b6fcf088-1667-43f4-a1c9-b79df0a6d050 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638323260Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=b6fcf088-1667-43f4-a1c9-b79df0a6d050 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638785902Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=97ab51d1-d3e5-4d91-8d6a-5f680867434f name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.648136288Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID
	17b741f1cf888       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   14 seconds ago      Running             kubernetes-dashboard        0                   1f01bfe7457f4
	dd8fdf37b52d4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   1                   a045f061f4245
	03ab9f1a46282       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         0                   d731ce911be4f
	7f16dde1fc9b1       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           23 seconds ago      Running             coredns                     0                   2e113946c5232
	4499305eb0f7f       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                           24 seconds ago      Running             kindnet-cni                 0                   61aa9e31874d4
	e919cbdfb2443       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           25 seconds ago      Running             kube-proxy                  0                   692901f61961d
	43dbac811c6be       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           45 seconds ago      Running             kube-scheduler              2                   08b2a93f79987
	3b6e550318532       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           45 seconds ago      Running             kube-controller-manager     2                   b475757a12a42
	a5dbf4c341ee4       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           45 seconds ago      Running             kube-apiserver              2                   9518e23dd5b9b
	5f39439703b10       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           45 seconds ago      Running             etcd                        2                   4983a9bf59823
	
	* 
	* ==> coredns [7f16dde1fc9b15e0c5a936ad881565e64ec0797e071ca0f3615f75be1d7a7ba5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210816221555-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210816221555-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=no-preload-20210816221555-6487
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210816221555-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:24:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:24:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20210816221555-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                caddf44c-6818-4116-a33d-8b1403a4962e
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-zmc4x                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-20210816221555-6487                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-pz7lz                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-20210816221555-6487             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-20210816221555-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-82g44                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-20210816221555-6487             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 metrics-server-7c784ccb57-b466w                           100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         24s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-9ndkn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-v5svh                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 47s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x4 over 47s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x4 over 47s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x4 over 47s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                27s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +2.050041] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-98b3ee991257
	[  +0.000002] ll header: 00000000: 02 42 b1 3b 84 51 02 42 c0 a8 43 02 08 00        .B.;.Q.B..C...
	[ +12.284935] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[Aug16 22:24] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth56124561
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be 79 e6 a5 9f 5c 08 06        .......y...\..
	[  +0.399250] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth4983c387
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 17 8f 85 e1 65 08 06        ...........e..
	[  +1.600259] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethb3548ee1
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 2f f8 2f f9 94 08 06        ......././....
	[  +0.559633] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth46787027
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 c0 9d 8e b4 af 08 06        ......b.......
	[  +0.039851] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethf7560af3
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f2 14 37 d6 50 f8 08 06        ........7.P...
	[  +0.363946] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethf46edfa1
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 d9 41 00 33 05 08 06        ........A.3...
	[  +0.104031] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev vethfcc96108
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ba 37 d4 b4 15 b7 08 06        .......7......
	[  +0.594832] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-98b3ee991257
	[  +0.000002] ll header: 00000000: 02 42 b1 3b 84 51 02 42 c0 a8 43 02 08 00        .B.;.Q.B..C...
	[  +0.885371] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethb5a2c402
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea d4 10 cd 8e c5 08 06        ..............
	[  +0.101530] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth85ef4cf2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 50 31 fa e3 5b 08 06        ......2P1..[..
	
	* 
	* ==> etcd [5f39439703b1098bcb2c996d36f81db6b863d3bc7dda1a069509a87e2ff0a3b1] <==
	* {"level":"info","ts":"2021-08-16T22:23:43.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-16T22:23:43.914Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.338Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20210816221555-6487 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:23:44.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-16T22:23:44.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:24:29 up  1:03,  0 users,  load average: 1.15, 2.13, 2.12
	Linux no-preload-20210816221555-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a5dbf4c341ee400d40847a5f8d81d87f6ae62104bf8f40f5c10b88c4913deb64] <==
	* I0816 22:23:47.427127       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0816 22:23:47.427493       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:23:47.428837       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:23:47.432459       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:23:47.435196       1 controller.go:611] quota admission added evaluator for: namespaces
	I0816 22:23:48.325444       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:23:48.325470       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:23:48.333617       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0816 22:23:48.336304       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0816 22:23:48.336326       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:23:48.673414       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:23:48.716181       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0816 22:23:48.834775       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0816 22:23:48.835511       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:23:48.838634       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:23:49.368623       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:23:50.424799       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:23:50.457048       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:23:55.619243       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:24:02.873090       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:24:02.922366       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0816 22:24:07.935138       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:24:07.935229       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:24:07.935245       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3b6e5503185321aa0b9f2a8dd00d97f87a6d5995e4ead91d0eb34f104511e1c1] <==
	* I0816 22:24:05.119447       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:24:05.133798       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:24:05.232522       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-b466w"
	I0816 22:24:05.814201       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:24:05.924559       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.015140       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.015521       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:24:06.024070       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.024233       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.025625       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.030476       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.030549       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.032073       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:24:06.037969       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:24:06.038204       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.038230       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.038244       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.116752       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.116765       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.126219       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.126966       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.212510       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.212628       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.222346       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-9ndkn"
	I0816 22:24:06.322689       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-v5svh"
	
	* 
	* ==> kube-proxy [e919cbdfb244328186a80d3e1a9645c58a3901f4e9bbfcf04ae307ef0d568d5c] <==
	* I0816 22:24:04.125483       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0816 22:24:04.125545       1 server_others.go:140] Detected node IP 192.168.67.2
	W0816 22:24:04.125576       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0816 22:24:04.239777       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:24:04.239817       1 server_others.go:212] Using iptables Proxier.
	I0816 22:24:04.239831       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:24:04.239848       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:24:04.240307       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:24:04.243470       1 config.go:315] Starting service config controller
	I0816 22:24:04.243495       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:24:04.243513       1 config.go:224] Starting endpoint slice config controller
	I0816 22:24:04.243517       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0816 22:24:04.320692       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210816221555-6487.169be9b2c501819d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed7410e7f54ae, ext:327495362, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210816221555-6487", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:
"no-preload-20210816221555-6487", UID:"no-preload-20210816221555-6487", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210816221555-6487.169be9b2c501819d" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:24:04.343806       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:24:04.343865       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [43dbac811c6beba2363524514bdb89ffacc43063e93d24959fe2698b532d9852] <==
	* W0816 22:23:47.349574       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:23:47.431614       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0816 22:23:47.431707       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 22:23:47.431738       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:23:47.431760       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0816 22:23:47.433230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:47.433308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:23:47.434256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:47.434488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:47.435645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436095       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:47.436208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:47.436366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:47.436512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:47.436592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:47.436663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:47.436886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:48.350685       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:48.377694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:48.385659       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:48.492379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0816 22:23:51.532734       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:18:22 UTC, end at Mon 2021-08-16 22:24:29 UTC. --
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.752546    4408 scope.go:110] "RemoveContainer" containerID="d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:06.753087    4408 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": container with ID starting with d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3 not found: ID does not exist" containerID="d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.753143    4408 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3} err="failed to get container status \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": rpc error: code = NotFound desc = could not find container \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": container with ID starting with d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3 not found: ID does not exist"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.812212    4408 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19540592-5a1c-41ae-bf8a-67a910086cad-kube-api-access-8pj5s" (OuterVolumeSpecName: "kube-api-access-8pj5s") pod "19540592-5a1c-41ae-bf8a-67a910086cad" (UID: "19540592-5a1c-41ae-bf8a-67a910086cad"). InnerVolumeSpecName "kube-api-access-8pj5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.831959    4408 reconciler.go:319] "Volume detached for volume \"kube-api-access-8pj5s\" (UniqueName: \"kubernetes.io/projected/19540592-5a1c-41ae-bf8a-67a910086cad-kube-api-access-8pj5s\") on node \"no-preload-20210816221555-6487\" DevicePath \"\""
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.832004    4408 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19540592-5a1c-41ae-bf8a-67a910086cad-config-volume\") on node \"no-preload-20210816221555-6487\" DevicePath \"\""
	Aug 16 22:24:09 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:09.638437    4408 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=19540592-5a1c-41ae-bf8a-67a910086cad path="/var/lib/kubelet/pods/19540592-5a1c-41ae-bf8a-67a910086cad/volumes"
	Aug 16 22:24:13 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:13.774849    4408 scope.go:110] "RemoveContainer" containerID="72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:14.778178    4408 scope.go:110] "RemoveContainer" containerID="72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:14.778336    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:14.778696    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:15 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:15.782121    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:15 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:15.782340    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:16.038831    4408 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-e919cbdfb244328186a80d3e1a9645c58a3901f4e9bbfcf04ae307ef0d568d5c.scope\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:16.783790    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:16.784062    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653025    4408 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653075    4408 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653231    4408 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kg478,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{
Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vol
umeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b466w_kube-system(4161efc3-7c01-456e-b9d5-6c09ca70c1f9): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653281    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b466w" podUID=4161efc3-7c01-456e-b9d5-6c09ca70c1f9
	Aug 16 22:24:26 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:26.064159    4408 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:24:27 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:27.064587    4408 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c] <==
	* 2021/08/16 22:24:15 Using namespace: kubernetes-dashboard
	2021/08/16 22:24:15 Using in-cluster config to connect to apiserver
	2021/08/16 22:24:15 Using secret token for csrf signing
	2021/08/16 22:24:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:24:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:24:15 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/16 22:24:15 Generating JWE encryption key
	2021/08/16 22:24:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:24:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:24:15 Initializing JWE encryption key from synchronized object
	2021/08/16 22:24:15 Creating in-cluster Sidecar client
	2021/08/16 22:24:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:24:15 Serving insecurely on HTTP port: 9090
	2021/08/16 22:24:15 Starting overwatch
	
	* 
	* ==> storage-provisioner [03ab9f1a4628206cf1e1ca0b6d15e457fa8e4988879154f8fb91512b2a4e77c6] <==
	* I0816 22:24:06.746837       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:24:06.757243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:24:06.757285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:24:06.821358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:24:06.821520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"058f8982-ac05-44d6-bc85-3a80d87b7013", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26 became leader
	I0816 22:24:06.821569       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26!
	I0816 22:24:06.921969       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487: exit status 2 (332.626132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-b466w
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w: exit status 1 (63.158797ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-b466w" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w: exit status 1
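
The non-running-pod sweep above is a plain field-selector query (status.phase!=Running) across all namespaces; note that it can race with pod deletion, which is why the follow-up `describe` reports NotFound. The same query via client-go, as a sketch with simplified kubeconfig handling:

    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same filter the harness uses: every pod not in phase Running, all namespaces.
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }
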
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210816221555-6487
helpers_test.go:236: (dbg) docker inspect no-preload-20210816221555-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2",
	        "Created": "2021-08-16T22:15:57.474130005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 218315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:18:22.213784577Z",
	            "FinishedAt": "2021-08-16T22:18:19.937832982Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/hostname",
	        "HostsPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/hosts",
	        "LogPath": "/var/lib/docker/containers/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2-json.log",
	        "Name": "/no-preload-20210816221555-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210816221555-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210816221555-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/119899da4ddcd887470ac68cc201dc9414c391f0c14a619ae7241b6c10f89bf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210816221555-6487",
	                "Source": "/var/lib/docker/volumes/no-preload-20210816221555-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210816221555-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210816221555-6487",
	                "name.minikube.sigs.k8s.io": "no-preload-20210816221555-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c83a586e6640736ff362f4190d336a19ec88791ccc7bf52861d1aa8e554aeef4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32933"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c83a586e6640",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210816221555-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "65a501908096"
	                    ],
	                    "NetworkID": "98b3ee991257792a6830dd24d04b5717e00f2ba3533153c90cff98d22c7a9c0d",
	                    "EndpointID": "fb0e430b9a84cef5766a8f4e0b1c88858a3b4a797039b210ee6891535c00f7cb",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
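
The inspect output above is the container record the Docker Engine API returns; the harness shells out to `docker inspect`, but the same data is reachable from Go through the Docker SDK. A sketch (the container name is taken from the output above; error handling simplified):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        info, err := cli.ContainerInspect(context.Background(), "no-preload-20210816221555-6487")
        if err != nil {
            log.Fatal(err)
        }
        // The fields the post-mortem cares about: liveness and the host-side port map.
        fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
        for port, bindings := range info.NetworkSettings.Ports {
            for _, b := range bindings {
                fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
            }
        }
    }
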
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487: exit status 2 (329.496303ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
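
With the host reported Running but the API-server check failing, the harness dumps `minikube logs` below. Those logs include minikube's own apiserver wait loop, which alternates `pgrep` for the kube-apiserver process with GETs against https://192.168.49.2:8444/healthz until one succeeds or the deadline passes. A minimal sketch of that style of poll, assuming illustrative timings (the apiserver serves a self-signed certificate, hence the InsecureSkipVerify for the probe):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Self-signed apiserver cert, so skip verification for the health probe only.
        hc := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := hc.Get("https://192.168.49.2:8444/healthz") // address from the logs below
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                log.Printf("healthz returned %d, retrying", code)
            } else {
                log.Printf("healthz unreachable: %v, retrying", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for apiserver healthz")
    }
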
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210816221555-6487 logs -n 25
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:55 UTC | Mon, 16 Aug 2021 22:17:51 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:17:59 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:59 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:18:20 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:25 UTC | Mon, 16 Aug 2021 22:19:09 UTC |
	|         | cert-options-20210816221525-6487                  |                                                |         |         |                               |                               |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	| -p      | cert-options-20210816221525-6487                  | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:09 UTC | Mon, 16 Aug 2021 22:19:10 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                |         |         |                               |                               |
	| unpause | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:11 UTC | Mon, 16 Aug 2021 22:19:12 UTC |
	|         | --alsologtostderr -v=5                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210816221525-6487               | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:10 UTC | Mon, 16 Aug 2021 22:19:13 UTC |
	|         | cert-options-20210816221525-6487                  |                                                |         |         |                               |                               |
	| -p      | pause-20210816221349-6487 logs                    | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:29 UTC | Mon, 16 Aug 2021 22:19:29 UTC |
	|         | -n 25                                             |                                                |         |         |                               |                               |
	| -p      | pause-20210816221349-6487 logs                    | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:30 UTC | Mon, 16 Aug 2021 22:19:31 UTC |
	|         | -n 25                                             |                                                |         |         |                               |                               |
	| delete  | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:32 UTC | Mon, 16 Aug 2021 22:19:35 UTC |
	|         | --alsologtostderr -v=5                            |                                                |         |         |                               |                               |
	| profile | list --output json                                | minikube                                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:35 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p pause-20210816221349-6487                      | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487         |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio         |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                 | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                    | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:21:15
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:21:11.560087  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:13.560955  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:15.620136  240293 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:21:15.620202  240293 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:21:15.620205  240293 out.go:311] Setting ErrFile to fd 2...
	I0816 22:21:15.620209  240293 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:21:15.620308  240293 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:21:15.620555  240293 out.go:305] Setting JSON to false
	I0816 22:21:15.655608  240293 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3643,"bootTime":1629148833,"procs":318,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:21:15.655702  240293 start.go:121] virtualization: kvm guest
	I0816 22:21:15.658735  240293 out.go:177] * [embed-certs-20210816221913-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:21:15.660463  240293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:21:15.658858  240293 notify.go:169] Checking for updates...
	I0816 22:21:15.662037  240293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:21:15.663349  240293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:21:15.664728  240293 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:21:15.665147  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:21:15.665515  240293 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:21:15.716323  240293 docker.go:132] docker version: linux-19.03.15
	I0816 22:21:15.716389  240293 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:21:15.794376  240293 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:21:15.751543454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:21:15.794459  240293 docker.go:244] overlay module found
	I0816 22:21:15.796939  240293 out.go:177] * Using the docker driver based on existing profile
	I0816 22:21:15.796963  240293 start.go:278] selected driver: docker
	I0816 22:21:15.796970  240293 start.go:751] validating driver "docker" against &{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:15.797067  240293 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:21:15.797107  240293 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:21:15.797127  240293 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:21:15.798748  240293 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:21:15.799574  240293 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:21:15.879748  240293 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:21:15.836065685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:21:15.879884  240293 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:21:15.879939  240293 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:21:15.882041  240293 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:21:15.882141  240293 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:21:15.882164  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:15.882176  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:15.882186  240293 start_flags.go:277] config:
	{Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:15.883921  240293 out.go:177] * Starting control plane node embed-certs-20210816221913-6487 in cluster embed-certs-20210816221913-6487
	I0816 22:21:15.883958  240293 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:21:15.885412  240293 out.go:177] * Pulling base image ...
	I0816 22:21:15.885439  240293 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:21:15.885471  240293 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:21:15.885482  240293 cache.go:56] Caching tarball of preloaded images
	I0816 22:21:15.885556  240293 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:21:15.885647  240293 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:21:15.885664  240293 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:21:15.886141  240293 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:21:15.972930  240293 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:21:15.972958  240293 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:21:15.972971  240293 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:21:15.973015  240293 start.go:313] acquiring machines lock for embed-certs-20210816221913-6487: {Name:mkaa6840e29b8ce519208ca05a6868b89ed678ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:21:15.973147  240293 start.go:317] acquired machines lock for "embed-certs-20210816221913-6487" in 87.665µs
	I0816 22:21:15.973167  240293 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:21:15.973173  240293 fix.go:55] fixHost starting: 
	I0816 22:21:15.973391  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:21:16.011364  240293 fix.go:108] recreateIfNeeded on embed-certs-20210816221913-6487: state=Stopped err=<nil>
	W0816 22:21:16.011393  240293 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:21:12.498256  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.498339  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.511166  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.698333  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.698432  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.711462  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.898762  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.898833  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.912242  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.912260  238595 api_server.go:164] Checking apiserver status ...
	I0816 22:21:12.912297  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:12.924044  238595 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.924063  238595 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:21:12.924069  238595 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:21:12.924078  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:21:12.924115  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:12.947173  238595 cri.go:76] found id: ""
	I0816 22:21:12.947231  238595 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:21:12.955761  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:21:12.962021  238595 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 16 22:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Aug 16 22:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 16 22:19 /etc/kubernetes/scheduler.conf
	
	I0816 22:21:12.962069  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 22:21:12.968271  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 22:21:12.974263  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 22:21:12.980161  238595 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.980208  238595 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:21:12.985775  238595 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 22:21:12.991569  238595 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:12.991613  238595 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:21:12.997245  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:13.003145  238595 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:13.003164  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.063198  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.524913  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.647844  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.728361  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:13.785589  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:21:13.785640  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:14.298633  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:14.798169  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.299035  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.799027  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:16.298175  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:16.798961  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:17.298777  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:15.557824  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:17.557988  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:16.013864  240293 out.go:177] * Restarting existing docker container for "embed-certs-20210816221913-6487" ...
	I0816 22:21:16.013932  240293 cli_runner.go:115] Run: docker start embed-certs-20210816221913-6487
	I0816 22:21:17.275157  240293 cli_runner.go:168] Completed: docker start embed-certs-20210816221913-6487: (1.261184914s)
	I0816 22:21:17.275241  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:21:17.315858  240293 kic.go:420] container "embed-certs-20210816221913-6487" state is running.
	I0816 22:21:17.316215  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:17.360466  240293 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/config.json ...
	I0816 22:21:17.360650  240293 machine.go:88] provisioning docker machine ...
	I0816 22:21:17.360673  240293 ubuntu.go:169] provisioning hostname "embed-certs-20210816221913-6487"
	I0816 22:21:17.360721  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:17.406246  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:17.406446  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:17.406464  240293 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816221913-6487 && echo "embed-certs-20210816221913-6487" | sudo tee /etc/hostname
	I0816 22:21:17.406998  240293 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54360->127.0.0.1:32959: read: connection reset by peer
	I0816 22:21:16.061401  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:18.560309  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:20.561398  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:17.798143  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:18.298261  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:18.798948  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:19.298814  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:19.798123  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:20.298107  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:20.333342  238595 api_server.go:70] duration metric: took 6.547748926s to wait for apiserver process to appear ...
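
The burst of pgrep lines above is a fixed-interval poll: roughly every 500ms the runner asks whether a kube-apiserver process matching the minikube config exists, then records a duration metric once it does. A standalone sketch of the same loop (waitForAPIServerProcess is a hypothetical name, not minikube's actual api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up,
// mirroring the ~500ms cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for apiserver process\n", time.Since(start))
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
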
	I0816 22:21:20.333376  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:21:20.333390  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:20.333866  238595 api_server.go:255] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0816 22:21:20.834559  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:20.607653  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816221913-6487
	
	I0816 22:21:20.607740  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:20.659003  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:20.659183  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:20.659208  240293 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816221913-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816221913-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816221913-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:21:20.783095  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: 
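
Note the first SSH dial at 22:21:17.406998 was reset because sshd inside the just-restarted container was not yet accepting connections; the hostname command only succeeded on a later attempt at 22:21:20. A sketch of the retry-until-up pattern this implies (address and port taken from the log; the helper is hypothetical):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry retries a freshly restarted container's SSH port until the
// TCP handshake succeeds instead of failing on the first connection reset.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh port never came up: %w", lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:32959", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
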
	I0816 22:21:20.783122  240293 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:21:20.783145  240293 ubuntu.go:177] setting up certificates
	I0816 22:21:20.783165  240293 provision.go:83] configureAuth start
	I0816 22:21:20.783220  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:20.830020  240293 provision.go:138] copyHostCerts
	I0816 22:21:20.830093  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:21:20.830106  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:21:20.830159  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:21:20.830261  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:21:20.830279  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:21:20.830300  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:21:20.830379  240293 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:21:20.830389  240293 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:21:20.830408  240293 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:21:20.830465  240293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816221913-6487 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210816221913-6487]
	I0816 22:21:20.944596  240293 provision.go:172] copyRemoteCerts
	I0816 22:21:20.944660  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:21:20.944698  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:20.987627  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.078832  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:21:21.095178  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:21:21.110394  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:21:21.128860  240293 provision.go:86] duration metric: configureAuth took 345.684672ms
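
configureAuth thus ends with three files landing in /etc/docker on the machine: the CA certificate plus the freshly generated server pair. A rough equivalent using the system scp binary in place of minikube's internal copier (port and user are from the log; run against real key-based auth):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Local cert files and their remote destinations, as in the scp lines above.
	files := map[string]string{
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range files {
		cmd := exec.Command("scp", "-P", "32959", local,
			fmt.Sprintf("docker@127.0.0.1:%s", remote))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("copy %s failed: %v\n%s", local, err, out)
		}
	}
}
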
	I0816 22:21:21.128885  240293 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:21:21.129071  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:21:21.129211  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.178067  240293 main.go:130] libmachine: Using SSH client type: native
	I0816 22:21:21.178225  240293 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32959 <nil> <nil>}
	I0816 22:21:21.178249  240293 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:21:21.688631  240293 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:21:21.688661  240293 machine.go:91] provisioned docker machine in 4.327996373s
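
Provisioning finishes by dropping a sysconfig fragment that marks the service CIDR 10.96.0.0/12 as an insecure registry, then restarting CRI-O so it takes effect. A sketch of that effect (must run as root; a stand-in for minikube's SSH-driven step, not its code path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Write the same fragment the tee command above produces, then bounce CRI-O.
	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		fmt.Println(err)
		return
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Printf("restart failed: %v\n%s", err, out)
	}
}
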
	I0816 22:21:21.688673  240293 start.go:267] post-start starting for "embed-certs-20210816221913-6487" (driver="docker")
	I0816 22:21:21.688686  240293 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:21:21.688733  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:21:21.688776  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.742815  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.831891  240293 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:21:21.834999  240293 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:21:21.835025  240293 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:21:21.835039  240293 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:21:21.835049  240293 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:21:21.835063  240293 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:21:21.835113  240293 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:21:21.835204  240293 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:21:21.835315  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:21:21.844143  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:21:21.861729  240293 start.go:270] post-start completed in 173.038547ms
	I0816 22:21:21.861791  240293 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:21:21.861839  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:21.904829  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:21.991978  240293 fix.go:57] fixHost completed within 6.018798061s
	I0816 22:21:21.992001  240293 start.go:80] releasing machines lock for "embed-certs-20210816221913-6487", held for 6.018842392s
	I0816 22:21:21.992085  240293 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210816221913-6487
	I0816 22:21:22.036698  240293 ssh_runner.go:149] Run: systemctl --version
	I0816 22:21:22.036732  240293 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:21:22.036762  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:22.036793  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:21:22.090133  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:22.090549  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:21:22.209931  240293 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:21:22.220079  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:21:22.230101  240293 docker.go:153] disabling docker service ...
	I0816 22:21:22.230151  240293 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:21:22.240560  240293 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:21:22.249789  240293 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:21:22.317235  240293 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:21:22.384532  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:21:22.394190  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:21:22.406563  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:21:22.414255  240293 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:21:22.414278  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:21:22.421925  240293 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:21:22.428474  240293 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:21:22.428529  240293 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:21:22.435893  240293 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:21:22.441981  240293 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:21:22.516727  240293 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:21:22.526452  240293 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:21:22.526514  240293 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:21:22.529600  240293 start.go:413] Will wait 60s for crictl version
	I0816 22:21:22.529654  240293 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:21:22.563406  240293 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:21:22.563494  240293 ssh_runner.go:149] Run: crio --version
	I0816 22:21:22.625584  240293 ssh_runner.go:149] Run: crio --version
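
Before querying crictl, the runner gives the restarted CRI-O up to 60s for its socket to appear, stat-ing /var/run/crio/crio.sock until it exists. A self-contained sketch of that wait (the helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the CRI socket path until it exists, matching the 60s
// budget in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
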
	I0816 22:21:20.058201  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:22.058236  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:22.691739  240293 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:21:22.691826  240293 cli_runner.go:115] Run: docker network inspect embed-certs-20210816221913-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:21:22.738471  240293 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0816 22:21:22.742507  240293 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:21:22.751708  240293 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:21:22.751767  240293 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:21:22.780397  240293 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:21:22.780419  240293 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:21:22.780466  240293 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:21:22.802073  240293 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:21:22.802099  240293 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:21:22.802167  240293 ssh_runner.go:149] Run: crio config
	I0816 22:21:22.873852  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:22.873875  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:22.873884  240293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:21:22.873896  240293 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816221913-6487 NodeName:embed-certs-20210816221913-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:21:22.874056  240293 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "embed-certs-20210816221913-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:21:22.874158  240293 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=embed-certs-20210816221913-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:21:22.874214  240293 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:21:22.892215  240293 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:21:22.892279  240293 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:21:22.898805  240293 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:21:22.910838  240293 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:21:22.922709  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0816 22:21:22.935781  240293 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:21:22.938475  240293 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
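
The bash one-liner above makes the /etc/hosts update idempotent: filter out any existing line ending in the tab-separated name, then append exactly one fresh mapping. The same logic in Go (requires root; a sketch under those assumptions, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites /etc/hosts so exactly one line maps the given
// name, the same filter-then-append trick as the bash one-liner.
func ensureHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) && line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
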
	I0816 22:21:22.946612  240293 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487 for IP: 192.168.76.2
	I0816 22:21:22.946658  240293 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:21:22.946680  240293 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:21:22.946734  240293 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/client.key
	I0816 22:21:22.946758  240293 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key.31bdca25
	I0816 22:21:22.946785  240293 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key
	I0816 22:21:22.946930  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:21:22.946980  240293 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:21:22.946995  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:21:22.947031  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:21:22.947069  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:21:22.947100  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:21:22.947152  240293 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:21:22.948307  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:21:22.963707  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:21:22.978856  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:21:22.994139  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816221913-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:21:23.010398  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:21:23.025797  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:21:23.042070  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:21:23.058339  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:21:23.073522  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:21:23.092171  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:21:23.112153  240293 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:21:23.127612  240293 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:21:23.138860  240293 ssh_runner.go:149] Run: openssl version
	I0816 22:21:23.143303  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:21:23.149943  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.152702  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.152740  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:21:23.157086  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:21:23.162960  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:21:23.169440  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.172300  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.172333  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:21:23.176717  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:21:23.183108  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:21:23.189796  240293 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.192572  240293 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.192617  240293 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:21:23.196953  240293 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
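
Each "test -L || ln -fs" above creates the hash-named symlink (for example b5213941.0) that OpenSSL's CA directory lookup expects: the link name is the subject hash reported by "openssl x509 -hash". A sketch of the convention (helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink OpenSSL uses to look up CA
// certificates, which is what the ln -fs commands above are doing.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
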
	I0816 22:21:23.202869  240293 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816221913-6487 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816221913-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:21:23.202969  240293 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:21:23.203000  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:23.225669  240293 cri.go:76] found id: ""
	I0816 22:21:23.225727  240293 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:21:23.231865  240293 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:21:23.231889  240293 kubeadm.go:600] restartCluster start
	I0816 22:21:23.231953  240293 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:21:23.237613  240293 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.238927  240293 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816221913-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:21:23.239535  240293 kubeconfig.go:128] "embed-certs-20210816221913-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:21:23.240624  240293 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
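
The "will repair!" line comes from a verification step: load the kubeconfig, and if the profile's context is absent, rewrite the file under a lock. A minimal read-only version of the check using client-go's clientcmd (the kubeconfig path here is hypothetical, and the repair itself is omitted):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig and look for the profile's context, the check that
	// produced the "context is missing" line above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		fmt.Println(err)
		return
	}
	if _, ok := cfg.Contexts["embed-certs-20210816221913-6487"]; !ok {
		fmt.Println("context is missing from kubeconfig - will repair!")
	}
}
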
	I0816 22:21:23.243940  240293 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:21:23.249774  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.249811  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.261064  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.461439  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.461533  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.474779  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.662020  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.662099  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.675194  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:23.861451  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:23.861530  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:23.874438  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.061711  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.061771  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.074758  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.262079  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.262150  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.275524  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.461740  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.461804  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.474796  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.662097  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.662170  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.675520  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:24.861779  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:24.861849  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:24.874503  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.061773  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.061835  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.074999  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.261220  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.261320  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.274763  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.462071  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.462139  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.475399  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:22.563949  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:25.061018  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:24.749578  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:21:24.749607  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:21:24.834697  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:24.839645  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:24.839665  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:25.334041  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:25.338514  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:25.338534  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:25.834845  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:25.841942  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:25.841973  238595 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:26.334549  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:21:26.339143  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:21:26.345794  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:21:26.345821  238595 api_server.go:129] duration metric: took 6.012437633s to wait for apiserver health ...
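
The healthz wait above treats anything other than HTTP 200 as not-yet-healthy: the initial connection refusal, the 403 from the anonymous user before RBAC is bootstrapped, and the 500s while poststarthooks (rbac/bootstrap-roles and friends) finish. Because the apiserver's serving certificate is self-signed at this stage, verification has to be skipped. A sketch of one probe plus the poll loop (checkHealthz is a hypothetical helper):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one probe against the apiserver's /healthz, skipping
// TLS verification and treating any non-200 body as unhealthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://192.168.49.2:8444/healthz"); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
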
	I0816 22:21:26.345834  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:21:26.345842  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:26.348018  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:21:26.348067  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:21:26.351496  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:21:26.351513  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:21:26.364862  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:21:26.663402  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:21:26.673137  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:21:26.673169  238595 system_pods.go:61] "coredns-558bd4d5db-zv2bp" [3f9aaeed-d94b-4e8e-8ee1-8b4e7e2bad94] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:21:26.673175  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [1edca304-f707-4964-9308-e948e78bbe97] Running
	I0816 22:21:26.673180  238595 system_pods.go:61] "kindnet-dlmtk" [44f0eada-8ea2-426e-9c49-cabf7add8b7c] Running
	I0816 22:21:26.673190  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [18a403fd-f4c6-4bc4-bace-a7ee9d3397d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:21:26.673201  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [79dc21b4-76c3-4911-96e4-3296115b78e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:21:26.673211  238595 system_pods.go:61] "kube-proxy-zb9nn" [3acb9237-9d5e-44cc-8304-181f590ae0ef] Running
	I0816 22:21:26.673223  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [18bc905c-fe6d-4e42-b9d1-f09bcac3a454] Running
	I0816 22:21:26.673232  238595 system_pods.go:61] "metrics-server-7c784ccb57-z8svs" [5f3b5b23-5056-4e3f-bc57-ae88895f06ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:21:26.673239  238595 system_pods.go:61] "storage-provisioner" [a2136862-0a6e-4594-8f38-bed49ceca1af] Running
	I0816 22:21:26.673244  238595 system_pods.go:74] duration metric: took 9.823114ms to wait for pod list to return data ...
	I0816 22:21:26.673253  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:21:26.676339  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:21:26.676371  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:21:26.676381  238595 node_conditions.go:105] duration metric: took 3.124111ms to run NodePressure ...
	I0816 22:21:26.676397  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.128593  238595 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:21:27.132902  238595 kubeadm.go:746] kubelet initialised
	I0816 22:21:27.132928  238595 kubeadm.go:747] duration metric: took 4.310514ms waiting for restarted kubelet to initialise ...
	I0816 22:21:27.132936  238595 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:21:27.139524  238595 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace to be "Ready" ...
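pod_ready.go implements this wait with its own polling loop; an equivalent one-shot gate is kubectl's built-in readiness wait. A sketch, assuming the kubeconfig context matches the profile name used elsewhere in this log:

    # Block until the CoreDNS pod reports the Ready condition, or time out
    # after the same 4m budget the log grants each system-critical pod.
    kubectl --context default-k8s-different-port-20210816221939-6487 \
      -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m
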
	I0816 22:21:24.558923  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:27.058005  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:25.661879  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.661942  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.674926  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:25.861996  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:25.862079  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:25.876444  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.061650  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.061708  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.074752  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.262045  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.262107  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.274819  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.274838  240293 api_server.go:164] Checking apiserver status ...
	I0816 22:21:26.274877  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:21:26.286135  240293 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.286151  240293 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
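Each "Checking apiserver status" line above is one iteration of a pgrep probe; exit status 1 means no matching process, and after repeated misses the restart path concludes "needs reconfigure". A minimal sketch of the same probe (the pgrep pattern is taken verbatim from the log; the retry count and interval here are illustrative, not minikube's actual tuning):

    # pgrep exits 1 when nothing matches, producing the "stopped:" warnings above.
    for i in $(seq 1 5); do            # retry budget is illustrative
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"; break
      fi
      sleep 0.2                        # interval is illustrative
    done
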
	I0816 22:21:26.286157  240293 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:21:26.286166  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:21:26.286204  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:21:26.332734  240293 cri.go:76] found id: ""
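Stopping kube-system containers starts with an enumeration through the CRI CLI; here the query returns no IDs (found id: ""), so there is nothing to stop and minikube moves straight on to stopping the kubelet. The check is reproducible by hand; a sketch:

    # List kube-system pod containers via crictl and stop any that exist.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    if [ -n "$ids" ]; then
      sudo crictl stop $ids            # unquoted on purpose: one ID per word
    else
      echo "no kube-system containers to stop"
    fi
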
	I0816 22:21:26.332793  240293 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:21:26.342135  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:21:26.349389  240293 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 16 22:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:19 /etc/kubernetes/scheduler.conf
	
	I0816 22:21:26.349444  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:21:26.356234  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:21:26.363575  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:21:26.370139  240293 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.370188  240293 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:21:26.377375  240293 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:21:26.385240  240293 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:21:26.385288  240293 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
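The grep/rm sequence above is a stale-kubeconfig sweep: any file under /etc/kubernetes that no longer references the expected control-plane endpoint is deleted so the kubeconfig init phase below can regenerate it. Here controller-manager.conf and scheduler.conf fail the check while admin.conf and kubelet.conf pass. Condensed into a loop (endpoint string copied from the log):

    # Drop kubeconfigs that do not point at the expected control-plane endpoint.
    endpoint='https://control-plane.minikube.internal:8443'
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done
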
	I0816 22:21:26.391547  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:26.399103  240293 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:21:26.399125  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:26.470913  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.372191  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.532311  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:27.631997  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
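Instead of a full kubeadm init, the reconfigure path replays individual init phases against the saved config: certificates, kubeconfigs, kubelet start, static control-plane manifests, and local etcd, in that order. The same sequence as a loop (binary and config paths from the log):

    # Phased re-init: regenerate only the pieces the reconfigure invalidated.
    BIN=/var/lib/minikube/binaries/v1.21.3
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      # $phase is intentionally unquoted so "certs all" splits into two words.
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
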
	I0816 22:21:27.703108  240293 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:21:27.703169  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:28.217409  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:28.717450  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:29.217644  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:29.717503  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:30.217609  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:27.563642  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:30.059977  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:29.154595  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:31.155291  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:29.557639  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:32.057361  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:30.717020  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:31.217129  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:31.716944  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:32.217589  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:32.717615  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:33.217826  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:33.717123  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:34.217056  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:21:34.236648  240293 api_server.go:70] duration metric: took 6.533533708s to wait for apiserver process to appear ...
	I0816 22:21:34.236675  240293 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:21:34.236687  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:34.237080  240293 api_server.go:255] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0816 22:21:34.737550  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:32.060908  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.061201  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:33.656544  238595 pod_ready.go:102] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.655844  238595 pod_ready.go:92] pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:34.655868  238595 pod_ready.go:81] duration metric: took 7.516321054s waiting for pod "coredns-558bd4d5db-zv2bp" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:34.655882  238595 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:36.665562  238595 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:34.058752  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:36.558116  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:38.423548  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:21:38.423576  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:21:38.737956  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:38.742611  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:38.742646  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:39.238185  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:39.243127  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:21:39.243156  240293 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:21:39.737204  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:21:39.744129  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:21:39.750956  240293 api_server.go:139] control plane version: v1.21.3
	I0816 22:21:39.750980  240293 api_server.go:129] duration metric: took 5.51429844s to wait for apiserver health ...
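The healthz transcript above is the normal bring-up progression: connection refused while the apiserver is not yet listening, 403 while anonymous requests are rejected before the RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, apiservice-registration-controller, and friends) are still settling, then 200 about five and a half seconds in. A curl sketch of the same probe (-k stands in for the CA-verified client minikube actually uses):

    # Poll /healthz until the apiserver answers 200 "ok"; -f makes curl fail
    # on the transient 403/500 responses, so the loop keeps retrying them.
    until curl -ksf https://192.168.76.2:8443/healthz >/dev/null; do
      sleep 0.5
    done
    echo "apiserver healthy"
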
	I0816 22:21:39.750991  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:21:39.751006  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:21:39.752807  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:21:39.752886  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:21:39.756576  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:21:39.756596  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:21:39.769956  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:21:40.241874  240293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:21:40.254628  240293 system_pods.go:59] 9 kube-system pods found
	I0816 22:21:40.254668  240293 system_pods.go:61] "coredns-558bd4d5db-s5rfs" [7146b870-68ad-407d-b3b2-bb620597d79a] Running
	I0816 22:21:40.254677  240293 system_pods.go:61] "etcd-embed-certs-20210816221913-6487" [94512d56-b2c2-427f-8b2d-c21aacd20a0e] Running
	I0816 22:21:40.254682  240293 system_pods.go:61] "kindnet-jx4gt" [569ce0ea-ff92-4730-a357-d37774ec5a9d] Running
	I0816 22:21:40.254688  240293 system_pods.go:61] "kube-apiserver-embed-certs-20210816221913-6487" [1d8f5d6e-36f4-4898-bcb8-3f2eba68010b] Running
	I0816 22:21:40.254700  240293 system_pods.go:61] "kube-controller-manager-embed-certs-20210816221913-6487" [ef9f2613-17c4-4bdb-b74b-299fc20cf91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:21:40.254707  240293 system_pods.go:61] "kube-proxy-ldxgj" [78bb138b-ce0a-41f2-a0c6-a4000e1146e2] Running
	I0816 22:21:40.254718  240293 system_pods.go:61] "kube-scheduler-embed-certs-20210816221913-6487" [e14e1b22-e7f9-4a56-9585-b24426755bdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:21:40.254727  240293 system_pods.go:61] "metrics-server-7c784ccb57-pqdqv" [07fd0b54-2e74-462c-a333-e4bb7cdc6570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:21:40.254733  240293 system_pods.go:61] "storage-provisioner" [f7dea14f-5a94-4bc2-9b3f-0f51d33cd218] Running
	I0816 22:21:40.254741  240293 system_pods.go:74] duration metric: took 12.847497ms to wait for pod list to return data ...
	I0816 22:21:40.254761  240293 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:21:40.258773  240293 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:21:40.258801  240293 node_conditions.go:123] node cpu capacity is 8
	I0816 22:21:40.258816  240293 node_conditions.go:105] duration metric: took 4.048388ms to run NodePressure ...
	I0816 22:21:40.258833  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:21:36.561262  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:39.060084  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:38.666063  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.666091  238595 pod_ready.go:81] duration metric: took 4.010195402s waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.666109  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.670127  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.670145  238595 pod_ready.go:81] duration metric: took 4.027483ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.670156  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.674043  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.674062  238595 pod_ready.go:81] duration metric: took 3.899459ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.674075  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zb9nn" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.677789  238595 pod_ready.go:92] pod "kube-proxy-zb9nn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.677807  238595 pod_ready.go:81] duration metric: took 3.723435ms waiting for pod "kube-proxy-zb9nn" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.677818  238595 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.681155  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:21:38.681177  238595 pod_ready.go:81] duration metric: took 3.350092ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:38.681187  238595 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:41.069361  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:39.058024  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:41.558288  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:40.822751  240293 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:21:40.826839  240293 kubeadm.go:746] kubelet initialised
	I0816 22:21:40.826858  240293 kubeadm.go:747] duration metric: took 4.079302ms waiting for restarted kubelet to initialise ...
	I0816 22:21:40.826865  240293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:21:40.832096  240293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace to be "Ready" ...
	I0816 22:21:42.846724  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.847780  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:41.560606  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.060244  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:43.070196  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:45.569472  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:44.057149  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:46.057919  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:48.557352  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:47.346793  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:49.347215  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:46.560497  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:49.060150  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:47.570048  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:50.070010  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:52.070043  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.057621  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.557774  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.348115  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.417886  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:51.060747  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:53.561154  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:54.570252  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:57.070456  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:55.557834  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:58.057474  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:55.846812  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:57.849366  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.348217  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:56.060197  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:58.560545  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.561053  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:21:59.569461  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:01.569894  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:00.556730  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:02.558818  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:02.846906  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:04.847312  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:03.059324  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:05.060842  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:03.570105  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:06.069210  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:05.057740  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.557500  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.349324  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:09.848075  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:07.060941  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:09.061382  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:08.069720  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:10.069872  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.069927  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:10.057865  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.557313  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:12.346961  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.347554  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:11.560761  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.060556  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.570237  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:17.070162  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:14.557390  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.557840  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:18.557906  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.347642  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:18.847528  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:16.560811  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:19.060192  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:19.070665  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.569617  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.057593  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:23.057769  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.350304  240293 pod_ready.go:102] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:22.349597  240293 pod_ready.go:92] pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.349624  240293 pod_ready.go:81] duration metric: took 41.517503384s waiting for pod "coredns-558bd4d5db-s5rfs" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.349635  240293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.353381  240293 pod_ready.go:92] pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.353398  240293 pod_ready.go:81] duration metric: took 3.753707ms waiting for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.353416  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.357098  240293 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.357119  240293 pod_ready.go:81] duration metric: took 3.696004ms waiting for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.357128  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.360781  240293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.360801  240293 pod_ready.go:81] duration metric: took 3.66616ms waiting for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.360813  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldxgj" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.364443  240293 pod_ready.go:92] pod "kube-proxy-ldxgj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.364458  240293 pod_ready.go:81] duration metric: took 3.637945ms waiting for pod "kube-proxy-ldxgj" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.364465  240293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.746026  240293 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:22:22.746046  240293 pod_ready.go:81] duration metric: took 381.574921ms waiting for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:22.746057  240293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	I0816 22:22:25.151863  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:21.060611  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:23.560974  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:24.070906  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:26.570138  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:25.557675  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:27.557835  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:27.650260  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.651969  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:26.059835  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:28.060143  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:30.559808  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.068839  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:31.069013  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:29.557981  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.057453  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.150995  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.650482  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:32.560209  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.560989  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:33.073518  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:35.570047  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:34.057526  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:36.556985  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:38.557172  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:37.151419  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:39.152016  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:37.060468  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:39.060676  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:38.069959  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:40.070255  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:41.057437  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:42.552389  213866 pod_ready.go:81] duration metric: took 4m0.400173208s waiting for pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace to be "Ready" ...
	E0816 22:22:42.552422  213866 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-lxnps" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:22:42.552443  213866 pod_ready.go:38] duration metric: took 4m3.999681718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:22:42.552468  213866 kubeadm.go:604] restartCluster took 4m52.206606075s
	W0816 22:22:42.552592  213866 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:22:42.552618  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:22:41.650639  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:44.151997  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:41.560954  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:44.060209  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:42.570273  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:45.070127  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:47.070330  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:46.650980  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:49.150679  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:46.560003  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:48.560502  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:49.570288  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:52.069874  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:51.151426  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:53.651600  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:51.060391  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:53.559430  218005 pod_ready.go:102] pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:55.556159  218005 pod_ready.go:81] duration metric: took 4m0.006201053s waiting for pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace to be "Ready" ...
	E0816 22:22:55.556182  218005 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-5nw62" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:22:55.556209  218005 pod_ready.go:38] duration metric: took 4m11.600806981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:22:55.556239  218005 kubeadm.go:604] restartCluster took 4m27.564944846s
	W0816 22:22:55.556359  218005 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:22:55.556394  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:22:54.569550  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:56.570201  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:59.549186  213866 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.996549042s)
	I0816 22:22:59.549246  213866 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:22:59.558782  213866 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:22:59.558841  213866 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:22:59.582632  213866 cri.go:76] found id: ""
	I0816 22:22:59.582680  213866 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:22:59.589250  213866 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:22:59.589313  213866 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:22:59.595410  213866 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:22:59.595449  213866 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
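Having timed out on the in-place restart, process 213866 tears the cluster down and rebuilds it: kubeadm reset over the CRI socket (16.996s, per the Completed line above), a kubelet stop, then a fresh kubeadm init that pre-acknowledges the preflight checks a reused docker-driver node would otherwise trip. Condensed sketch (the ignore list is abbreviated here; the init line above carries it in full):

    # Reset-and-reinit fallback after the restart path gave up.
    BIN=/var/lib/minikube/binaries/v1.14.0
    sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo systemctl stop -f kubelet
    sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,SystemVerification
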
	I0816 22:22:56.151598  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:58.151860  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:00.152384  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:22:59.903823  213866 out.go:204]   - Generating certificates and keys ...
	I0816 22:22:58.570653  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:01.070217  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:00.931087  213866 out.go:204]   - Booting up control plane ...
	I0816 22:23:02.153047  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:04.651269  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:03.070450  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:05.569721  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:07.151505  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:09.652219  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:10.973536  213866 out.go:204]   - Configuring RBAC rules ...
	I0816 22:23:11.388904  213866 cni.go:93] Creating CNI manager for ""
	I0816 22:23:11.388927  213866 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:23:07.570517  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:10.070537  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:11.390636  213866 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:23:11.390703  213866 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:23:11.394246  213866 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0816 22:23:11.394264  213866 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:23:11.406315  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:23:11.608487  213866 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:23:11.608574  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:11.608583  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816221528-6487 minikube.k8s.io/updated_at=2021_08_16T22_23_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:11.713735  213866 ops.go:34] apiserver oom_adj: 16
	I0816 22:23:11.713761  213866 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:23:11.713784  213866 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
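ops.go then reads the apiserver's OOM adjustment, finds the inherited value of 16, and lowers it to -10 so the kernel's OOM killer prefers almost anything else on the node. A sketch mirroring the two commands in the log (oom_adj is the legacy knob; modern kernels expose oom_score_adj alongside it):

    # Make kube-apiserver a low-priority OOM-killer target.
    pid=$(pgrep kube-apiserver)
    cat "/proc/$pid/oom_adj"                  # logged as 16 before the change
    echo -10 | sudo tee "/proc/$pid/oom_adj"
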
	I0816 22:23:11.713864  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.275854  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.776002  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:13.276134  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:12.151756  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:14.650904  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:12.570276  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:14.570427  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:17.072069  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:13.775987  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:14.276358  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:14.775456  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:15.276056  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:15.775401  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:16.276191  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:16.775833  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.276185  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.775937  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:18.276372  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:17.151526  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:19.651003  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:19.570330  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:22.069741  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:18.775755  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:19.276305  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:19.775502  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:20.276246  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:20.775975  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:21.275393  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:21.776071  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.276202  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.776321  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:23.275752  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:22.151401  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:24.152069  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:24.069903  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:26.071687  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:23.775524  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:24.276016  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:24.775772  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:25.276335  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:25.775974  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:26.275589  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:26.775735  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:27.275457  213866 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:27.341729  213866 kubeadm.go:985] duration metric: took 15.733205702s to wait for elevateKubeSystemPrivileges.
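The half-second cadence of the "kubectl get sa default" lines above is minikube polling for the default service account, which only appears once kube-system privileges have been elevated; here the poll ran for about 15.7s. A minimal sketch of such a loop in Go (helper name, timeout, and error text are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls "kubectl get sa default" until it succeeds,
	// mirroring the ~500ms cadence of the log lines above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // the default service account exists; bootstrap is done
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %v", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.14.0/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println(err)
	}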
	I0816 22:23:27.341763  213866 kubeadm.go:392] StartCluster complete in 5m37.024207841s
	I0816 22:23:27.341785  213866 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:23:27.341894  213866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:23:27.343738  213866 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:23:27.861292  213866 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816221528-6487" rescaled to 1
	I0816 22:23:27.861357  213866 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:23:27.863262  213866 out.go:177] * Verifying Kubernetes components...
	I0816 22:23:27.863325  213866 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:23:27.861411  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:23:27.861427  213866 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:23:27.861565  213866 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:23:27.863424  213866 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863441  213866 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863445  213866 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.863454  213866 addons.go:147] addon dashboard should already be in state true
	I0816 22:23:27.863466  213866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863417  213866 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863485  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.863497  213866 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863429  213866 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816221528-6487"
	I0816 22:23:27.863532  213866 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.863541  213866 addons.go:147] addon metrics-server should already be in state true
	I0816 22:23:27.863566  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	W0816 22:23:27.863509  213866 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:23:27.863643  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.863788  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864032  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864037  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.864252  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.927695  213866 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:23:27.927754  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:23:27.927763  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:23:27.927817  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.929360  213866 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:23:27.930891  213866 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:23:27.930964  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:23:27.930976  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:23:27.931027  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.935860  213866 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816221528-6487"
	W0816 22:23:27.935883  213866 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:23:27.935953  213866 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:23:27.936440  213866 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:23:27.938570  213866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:23:27.938670  213866 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:23:27.938685  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:23:27.938730  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:27.964347  213866 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:23:27.964621  213866 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
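The pipeline above edits the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts stanza immediately before the "forward . /etc/resolv.conf" line, and replaces the ConfigMap, so cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.58.1 on this network). Reconstructed from the sed expression in the log, the resulting Corefile fragment looks like:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The "host record injected into CoreDNS" line at 22:23:28 confirms the replace succeeded.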
	I0816 22:23:27.984062  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:27.991995  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:28.006061  213866 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:23:28.006083  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:23:28.006142  213866 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:23:28.024841  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:28.060997  213866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:23:28.128450  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:23:28.128477  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:23:28.128606  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:23:28.128624  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:23:28.148405  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:23:28.148429  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:23:28.212613  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:23:28.212639  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:23:28.227353  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:23:28.227378  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:23:28.229886  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:23:28.231855  213866 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:23:28.231872  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:23:28.241746  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:23:28.241765  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:23:28.246032  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:23:28.321947  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:23:28.321974  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:23:28.332441  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:23:28.416715  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:23:28.416743  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:23:28.418070  213866 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0816 22:23:28.435330  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:23:28.435355  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:23:28.530604  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:23:28.530683  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:23:28.624980  213866 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:23:28.625011  213866 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:23:28.721013  213866 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:23:29.237769  213866 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816221528-6487"
	I0816 22:23:26.650225  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:28.651097  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:28.569915  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:31.069733  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:29.633186  213866 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:23:29.633214  213866 addons.go:344] enableAddons completed in 1.771802579s
	I0816 22:23:29.970660  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:32.470671  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:31.150575  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:33.650949  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:36.015986  218005 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (40.459570687s)
	I0816 22:23:36.016042  218005 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:23:36.025388  218005 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:23:36.025449  218005 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:23:36.048066  218005 cri.go:76] found id: ""
	I0816 22:23:36.048116  218005 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:23:36.054724  218005 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:23:36.054776  218005 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:23:36.060924  218005 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:23:36.060960  218005 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
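The exit-status-2 "ls" at 22:23:36 is how stale control-plane configs are detected: after the 40s "kubeadm reset", none of the four kubeconfig-style files remain, so cleanup is skipped and "kubeadm init" runs against a clean /etc/kubernetes. The long --ignore-preflight-errors list suppresses checks that are meaningless inside the docker driver's container (swap, memory, SystemVerification, the bridge-nf-call-iptables file, and minikube's pre-populated directories). A hedged sketch of the probe (helper name illustrative; paths are from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// hasStaleConfigs reproduces the "ls -la" check above: all files
	// present means stale config to clean up; exit status 2 (no such
	// file) means a fresh "kubeadm init" can proceed directly.
	func hasStaleConfigs() (bool, error) {
		cmd := exec.Command("sudo", "ls", "-la",
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf")
		err := cmd.Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			return false, nil // files missing: nothing to clean up
		}
		return false, err // transport failure, not a clean "missing" answer
	}

	func main() {
		stale, err := hasStaleConfigs()
		fmt.Println(stale, err)
	}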
	I0816 22:23:36.321958  218005 out.go:204]   - Generating certificates and keys ...
	I0816 22:23:33.070544  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:35.569576  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:34.970715  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:37.470636  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:36.152317  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:38.651287  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:37.198357  218005 out.go:204]   - Booting up control plane ...
	I0816 22:23:37.569853  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:40.069260  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:42.070108  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:39.970854  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:42.470440  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:41.152273  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:43.152413  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:44.070784  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:46.570468  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:44.471099  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:46.970981  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:45.650685  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:47.651865  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.151571  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.251329  218005 out.go:204]   - Configuring RBAC rules ...
	I0816 22:23:50.662704  218005 cni.go:93] Creating CNI manager for ""
	I0816 22:23:50.662740  218005 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:23:48.570517  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:51.069641  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:49.470857  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:51.970879  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:52.650968  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.151574  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:50.664647  218005 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:23:50.664709  218005 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:23:50.668434  218005 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:23:50.668452  218005 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:23:50.680365  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
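The kindnet recommendation at 22:23:50 follows from the driver/runtime pair: with the docker driver and a container runtime other than dockerd (crio here), a standalone CNI is needed, and minikube picks kindnet, then applies its manifest with the cluster's own kubectl. A simplified sketch of that decision (the real selection logic in cni.go considers more cases):

	package main

	import "fmt"

	// recommendCNI mirrors the log line ""docker" driver + crio runtime
	// found, recommending kindnet". Simplified for illustration.
	func recommendCNI(driver, containerRuntime string) string {
		if driver == "docker" && containerRuntime != "docker" {
			return "kindnet"
		}
		return "" // other driver/runtime pairs are handled elsewhere
	}

	func main() {
		fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
	}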
	I0816 22:23:50.828044  218005 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:23:50.828111  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:50.828111  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816221555-6487 minikube.k8s.io/updated_at=2021_08_16T22_23_50_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:50.929668  218005 ops.go:34] apiserver oom_adj: -16
	I0816 22:23:50.929747  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:51.484102  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:51.984386  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:52.483881  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:52.983543  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.484311  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.983847  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:54.484282  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:54.983636  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:55.484558  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:53.070159  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.606653  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:54.470892  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:56.970780  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:23:57.650928  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:00.151897  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:55.984526  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:56.484026  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:56.984449  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:57.483595  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:57.984552  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.483696  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.984488  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:59.484178  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:59.984161  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:00.483560  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:23:58.069766  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:00.569411  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:23:59.470437  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:01.970230  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:00.984372  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:01.483821  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:01.983750  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:02.483553  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:02.984248  218005 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:24:03.154554  218005 kubeadm.go:985] duration metric: took 12.326511298s to wait for elevateKubeSystemPrivileges.
	I0816 22:24:03.154579  218005 kubeadm.go:392] StartCluster complete in 5m35.191699125s
	I0816 22:24:03.154598  218005 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:24:03.154685  218005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:03.156374  218005 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:24:03.671880  218005 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816221555-6487" rescaled to 1
	I0816 22:24:03.671985  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:24:03.671991  218005 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:24:03.673896  218005 out.go:177] * Verifying Kubernetes components...
	I0816 22:24:03.672162  218005 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:24:03.673971  218005 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:24:03.674016  218005 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674036  218005 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674041  218005 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:24:03.674048  218005 addons.go:59] Setting dashboard=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674062  218005 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674078  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674083  218005 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816221555-6487"
	I0816 22:24:03.674090  218005 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816221555-6487"
	I0816 22:24:03.674101  218005 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674112  218005 addons.go:147] addon metrics-server should already be in state true
	I0816 22:24:03.674135  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674068  218005 addons.go:135] Setting addon dashboard=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.674173  218005 addons.go:147] addon dashboard should already be in state true
	I0816 22:24:03.674208  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.674382  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.672248  218005 config.go:177] Loaded profile config "no-preload-20210816221555-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:24:03.674622  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.674690  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.674623  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.736692  218005 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:24:03.736813  218005 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:24:03.736823  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:24:03.736874  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.744282  218005 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816221555-6487"
	W0816 22:24:03.744310  218005 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:24:03.744350  218005 host.go:66] Checking if "no-preload-20210816221555-6487" exists ...
	I0816 22:24:03.744883  218005 cli_runner.go:115] Run: docker container inspect no-preload-20210816221555-6487 --format={{.State.Status}}
	I0816 22:24:03.748946  218005 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:24:03.750352  218005 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:24:03.750418  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:24:03.750430  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:24:03.750478  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:02.650397  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:04.651301  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:03.751840  218005 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:24:03.751945  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:24:03.751957  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:24:03.752009  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.755193  218005 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816221555-6487" to be "Ready" ...
	I0816 22:24:03.755358  218005 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:24:03.761862  218005 node_ready.go:49] node "no-preload-20210816221555-6487" has status "Ready":"True"
	I0816 22:24:03.761880  218005 node_ready.go:38] duration metric: took 6.660997ms waiting for node "no-preload-20210816221555-6487" to be "Ready" ...
	I0816 22:24:03.761892  218005 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:03.768294  218005 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace to be "Ready" ...
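Every pod_ready line in this log reduces to one check: the PodReady condition in the pod's status. A sketch using client-go (kubeconfig path and pod name are taken from the log; this is an illustration, not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True,
	// the same status the "Ready":"False"/"True" lines are printing.
	func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(client, "kube-system", "coredns-78fcd69978-pq6qg")
		fmt.Println(ready, err)
	}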
	I0816 22:24:03.801770  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.802512  218005 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:24:03.802535  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:24:03.802590  218005 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210816221555-6487
	I0816 22:24:03.809212  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.812103  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:03.857686  218005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32934 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816221555-6487/id_rsa Username:docker}
	I0816 22:24:04.027691  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:24:04.027719  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:24:04.029430  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:24:04.029747  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:24:04.029767  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:24:04.043293  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:24:04.043322  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:24:04.112589  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:24:04.112614  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:24:04.136322  218005 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:24:04.136350  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:24:04.221098  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:24:04.221124  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:24:04.225575  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:24:04.234235  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:24:04.236543  218005 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0816 22:24:04.240565  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:24:04.240591  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:24:04.332986  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:24:04.333013  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:24:04.512532  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:24:04.512619  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:24:04.614597  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:24:04.614623  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:24:04.639772  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:24:04.639802  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:24:04.732668  218005 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:24:04.732700  218005 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:24:04.830142  218005 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:24:05.440517  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.214895018s)
	I0816 22:24:05.440580  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.20631443s)
	I0816 22:24:05.440620  218005 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816221555-6487"
	I0816 22:24:05.914058  218005 pod_ready.go:102] pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.519460  218005 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.68926584s)
	I0816 22:24:02.571679  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:05.070253  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:04.470734  213866 node_ready.go:58] node "old-k8s-version-20210816221528-6487" has status "Ready":"False"
	I0816 22:24:06.470971  213866 node_ready.go:49] node "old-k8s-version-20210816221528-6487" has status "Ready":"True"
	I0816 22:24:06.471000  213866 node_ready.go:38] duration metric: took 38.506620086s waiting for node "old-k8s-version-20210816221528-6487" to be "Ready" ...
	I0816 22:24:06.471013  213866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:06.474131  213866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:08.523994  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.651526  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:08.651579  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:06.521319  218005 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0816 22:24:06.521345  218005 addons.go:344] enableAddons completed in 2.849194274s
	I0816 22:24:08.280217  218005 pod_ready.go:102] pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:09.277642  218005 pod_ready.go:97] error getting pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-pq6qg" not found
	I0816 22:24:09.277677  218005 pod_ready.go:81] duration metric: took 5.509353036s waiting for pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace to be "Ready" ...
	E0816 22:24:09.277690  218005 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-pq6qg" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-pq6qg" not found
	I0816 22:24:09.277699  218005 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:09.824157  218005 pod_ready.go:92] pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:09.824190  218005 pod_ready.go:81] duration metric: took 546.47538ms waiting for pod "coredns-78fcd69978-zmc4x" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:09.824204  218005 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.838486  218005 pod_ready.go:92] pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.838510  218005 pod_ready.go:81] duration metric: took 1.014297972s waiting for pod "etcd-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.838528  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.844445  218005 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.844470  218005 pod_ready.go:81] duration metric: took 5.932696ms waiting for pod "kube-apiserver-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.844485  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.849958  218005 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.849978  218005 pod_ready.go:81] duration metric: took 5.485285ms waiting for pod "kube-controller-manager-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.849991  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-82g44" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.859397  218005 pod_ready.go:92] pod "kube-proxy-82g44" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:10.859417  218005 pod_ready.go:81] duration metric: took 9.418559ms waiting for pod "kube-proxy-82g44" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:10.859429  218005 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:11.079662  218005 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:11.079685  218005 pod_ready.go:81] duration metric: took 220.246797ms waiting for pod "kube-scheduler-no-preload-20210816221555-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:11.079695  218005 pod_ready.go:38] duration metric: took 7.317786525s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:24:11.079716  218005 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:24:11.079760  218005 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:11.138671  218005 api_server.go:70] duration metric: took 7.466643672s to wait for apiserver process to appear ...
	I0816 22:24:11.138701  218005 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:11.138714  218005 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:24:11.144121  218005 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:24:11.145010  218005 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:24:11.145031  218005 api_server.go:129] duration metric: took 6.323339ms to wait for apiserver health ...
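The healthz probe above is a plain HTTPS GET against the apiserver; a 200 with body "ok" counts as healthy. A minimal sketch (the real check presents the cluster CA; InsecureSkipVerify is an illustration-only shortcut):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}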
	I0816 22:24:11.145040  218005 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:24:11.281080  218005 system_pods.go:59] 9 kube-system pods found
	I0816 22:24:11.281116  218005 system_pods.go:61] "coredns-78fcd69978-zmc4x" [1fc66fbb-952d-43b5-af77-f7551a8ed70e] Running
	I0816 22:24:11.281123  218005 system_pods.go:61] "etcd-no-preload-20210816221555-6487" [43927863-2c25-418d-a2a0-af7a6c1c475d] Running
	I0816 22:24:11.281130  218005 system_pods.go:61] "kindnet-pz7lz" [0e675e1e-c1e4-4ed5-b148-1b64d0933e1d] Running
	I0816 22:24:11.281136  218005 system_pods.go:61] "kube-apiserver-no-preload-20210816221555-6487" [1092f52d-df2b-42a1-850b-93c80e4f8146] Running
	I0816 22:24:11.281142  218005 system_pods.go:61] "kube-controller-manager-no-preload-20210816221555-6487" [0f84e622-668e-4d39-a6f9-d165fe87089e] Running
	I0816 22:24:11.281147  218005 system_pods.go:61] "kube-proxy-82g44" [80dd61db-1545-4dc7-bd88-00ae47943849] Running
	I0816 22:24:11.281154  218005 system_pods.go:61] "kube-scheduler-no-preload-20210816221555-6487" [8a491c0e-1fdf-4a83-a89f-5d5497f54377] Running
	I0816 22:24:11.281166  218005 system_pods.go:61] "metrics-server-7c784ccb57-b466w" [4161efc3-7c01-456e-b9d5-6c09ca70c1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:24:11.281177  218005 system_pods.go:61] "storage-provisioner" [40ef7855-e8e1-4106-9694-5bee902ec410] Running
	I0816 22:24:11.281186  218005 system_pods.go:74] duration metric: took 136.138888ms to wait for pod list to return data ...
	I0816 22:24:11.281198  218005 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:24:11.479313  218005 default_sa.go:45] found service account: "default"
	I0816 22:24:11.479340  218005 default_sa.go:55] duration metric: took 198.135098ms for default service account to be created ...
	I0816 22:24:11.479350  218005 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:24:11.682753  218005 system_pods.go:86] 9 kube-system pods found
	I0816 22:24:11.682787  218005 system_pods.go:89] "coredns-78fcd69978-zmc4x" [1fc66fbb-952d-43b5-af77-f7551a8ed70e] Running
	I0816 22:24:11.682795  218005 system_pods.go:89] "etcd-no-preload-20210816221555-6487" [43927863-2c25-418d-a2a0-af7a6c1c475d] Running
	I0816 22:24:11.682813  218005 system_pods.go:89] "kindnet-pz7lz" [0e675e1e-c1e4-4ed5-b148-1b64d0933e1d] Running
	I0816 22:24:11.682821  218005 system_pods.go:89] "kube-apiserver-no-preload-20210816221555-6487" [1092f52d-df2b-42a1-850b-93c80e4f8146] Running
	I0816 22:24:11.682829  218005 system_pods.go:89] "kube-controller-manager-no-preload-20210816221555-6487" [0f84e622-668e-4d39-a6f9-d165fe87089e] Running
	I0816 22:24:11.682835  218005 system_pods.go:89] "kube-proxy-82g44" [80dd61db-1545-4dc7-bd88-00ae47943849] Running
	I0816 22:24:11.682842  218005 system_pods.go:89] "kube-scheduler-no-preload-20210816221555-6487" [8a491c0e-1fdf-4a83-a89f-5d5497f54377] Running
	I0816 22:24:11.682860  218005 system_pods.go:89] "metrics-server-7c784ccb57-b466w" [4161efc3-7c01-456e-b9d5-6c09ca70c1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:24:11.682873  218005 system_pods.go:89] "storage-provisioner" [40ef7855-e8e1-4106-9694-5bee902ec410] Running
	I0816 22:24:11.682882  218005 system_pods.go:126] duration metric: took 203.52527ms to wait for k8s-apps to be running ...
	I0816 22:24:11.682895  218005 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:24:11.682942  218005 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:24:11.720433  218005 system_svc.go:56] duration metric: took 37.530414ms WaitForService to wait for kubelet.
	I0816 22:24:11.720464  218005 kubeadm.go:547] duration metric: took 8.048442729s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:24:11.720494  218005 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:24:11.878128  218005 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:24:11.878155  218005 node_conditions.go:123] node cpu capacity is 8
	I0816 22:24:11.878169  218005 node_conditions.go:105] duration metric: took 157.669186ms to run NodePressure ...
	I0816 22:24:11.878180  218005 start.go:231] waiting for startup goroutines ...
	I0816 22:24:11.931309  218005 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:24:11.933325  218005 out.go:177] 
	W0816 22:24:11.933509  218005 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:24:11.934987  218005 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:24:11.936447  218005 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816221555-6487" cluster and "default" namespace by default
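The closing warning compares kubectl's minor version (1.20) against the cluster's (1.22): a skew of 2 is outside the supported +/-1 window, hence the advisory and the "minikube kubectl" hint. A simplified sketch of the computation (pre-release suffixes like "-rc.0" are ignored for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a version string such as
	// "1.20.5" or "1.22.0-rc.0". Error handling is elided for brevity.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.20.5", "1.22.0-rc.0"
		skew := minor(cluster) - minor(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 2
		if skew > 1 {
			fmt.Println("warning: kubectl may be incompatible with this cluster")
		}
	}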
	I0816 22:24:07.570467  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:09.570953  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:11.577640  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:10.981764  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:13.480977  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:11.153078  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:13.651464  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:14.071289  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:16.570307  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:15.980670  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:17.980751  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:16.151684  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:18.651272  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:19.070575  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:21.569636  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:19.981831  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:22.481224  213866 pod_ready.go:102] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:23.480538  213866 pod_ready.go:92] pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.480578  213866 pod_ready.go:81] duration metric: took 17.00642622s waiting for pod "coredns-fb8b8dccf-7z27q" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.480592  213866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.484151  213866 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.484166  213866 pod_ready.go:81] duration metric: took 3.564442ms waiting for pod "kube-controller-manager-old-k8s-version-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.484176  213866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w5rw" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.487349  213866 pod_ready.go:92] pod "kube-proxy-9w5rw" in "kube-system" namespace has status "Ready":"True"
	I0816 22:24:23.487362  213866 pod_ready.go:81] duration metric: took 3.179929ms waiting for pod "kube-proxy-9w5rw" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:23.487370  213866 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace to be "Ready" ...
	I0816 22:24:21.151011  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:23.151110  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:25.151266  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:24.069317  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:26.069716  238595 pod_ready.go:102] pod "metrics-server-7c784ccb57-z8svs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:25.495411  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:27.495624  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:27.152196  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:29.651278  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
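	The pod_ready polling above checks each pod's Ready condition; roughly the same probe by hand (a sketch, using the no-preload profile's context and a pod named earlier in the log):
	    $ kubectl --context no-preload-20210816221555-6487 -n kube-system \
	        get pod metrics-server-7c784ccb57-b466w \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'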
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:18:22 UTC, end at Mon 2021-08-16 22:24:31 UTC. --
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.917046259Z" level=info msg="Starting container: 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a" id=b180841a-def6-4f50-8195-77c2912b1592 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.925119819Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\""
	Aug 16 22:24:12 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:12.944251948Z" level=info msg="Started container 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=b180841a-def6-4f50-8195-77c2912b1592 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.775404968Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=9dcf1e44-3402-4a2e-9b54-99ce62ef81b4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.776867007Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9dcf1e44-3402-4a2e-9b54-99ce62ef81b4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.777419165Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=4a6eaa66-a3d2-4d6f-aab9-bce2aaeacb38 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.779215665Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a6eaa66-a3d2-4d6f-aab9-bce2aaeacb38 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:13 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:13.780008025Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=959da50f-9b05-4360-8567-bbc69e0d4780 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.045478122Z" level=info msg="Created container dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=959da50f-9b05-4360-8567-bbc69e0d4780 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.046025171Z" level=info msg="Starting container: dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310" id=481ef33f-e430-4e81-949f-ff4c7fac0f00 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.070205204Z" level=info msg="Started container dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=481ef33f-e430-4e81-949f-ff4c7fac0f00 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.779075090Z" level=info msg="Removing container: 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a" id=34e063b5-ea66-4729-a886-06cb979698fa name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:24:14 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:14.817302142Z" level=info msg="Removed container 72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn/dashboard-metrics-scraper" id=34e063b5-ea66-4729-a886-06cb979698fa name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.124090294Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f" id=63807842-46e2-4e08-be66-ead0fe3759c4 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.124813846Z" level=info msg="Checking image status: kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6" id=bc4fd296-e7a8-419d-8c61-2ce1cf80b966 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.125548140Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,RepoTags:[docker.io/kubernetesui/dashboard:v2.1.0],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 docker.io/kubernetesui/dashboard@sha256:8cd877c1c0909bdd50043edc18b89cfbbf0614a57893ebf59b6bd1ddb5419323],Size_:228529574,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=bc4fd296-e7a8-419d-8c61-2ce1cf80b966 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.126354416Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=85543814-c253-4c18-860c-2bec804632b3 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.138353366Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/95aa46e0eac4db6daa4228516237d20080904e346393968b05f26fdd79d26dd8/merged/etc/group: no such file or directory"
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.280472912Z" level=info msg="Created container 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=85543814-c253-4c18-860c-2bec804632b3 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.280949622Z" level=info msg="Starting container: 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c" id=97b547a4-b4bc-44a7-8dc4-5d704a05015d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:15 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:15.290668631Z" level=info msg="Started container 17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-v5svh/kubernetes-dashboard" id=97b547a4-b4bc-44a7-8dc4-5d704a05015d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638042079Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=b6fcf088-1667-43f4-a1c9-b79df0a6d050 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638323260Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=b6fcf088-1667-43f4-a1c9-b79df0a6d050 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.638785902Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=97ab51d1-d3e5-4d91-8d6a-5f680867434f name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:24:17 no-preload-20210816221555-6487 crio[243]: time="2021-08-16 22:24:17.648136288Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
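	The pulls, creates, and removals logged above can be cross-checked against CRI-O's state from a shell inside the node (e.g. via minikube ssh):
	    $ sudo crictl images | grep dashboard   # the kubernetesui/dashboard image pulled above
	    $ sudo crictl ps -a                     # all containers, including exited attempts
	    $ sudo crictl inspect 17b741f1cf888     # full status for a container by ID prefix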
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID
	17b741f1cf888       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   16 seconds ago      Running             kubernetes-dashboard        0                   1f01bfe7457f4
	dd8fdf37b52d4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   1                   a045f061f4245
	03ab9f1a46282       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         0                   d731ce911be4f
	7f16dde1fc9b1       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           25 seconds ago      Running             coredns                     0                   2e113946c5232
	4499305eb0f7f       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                           26 seconds ago      Running             kindnet-cni                 0                   61aa9e31874d4
	e919cbdfb2443       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           27 seconds ago      Running             kube-proxy                  0                   692901f61961d
	43dbac811c6be       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           47 seconds ago      Running             kube-scheduler              2                   08b2a93f79987
	3b6e550318532       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           47 seconds ago      Running             kube-controller-manager     2                   b475757a12a42
	a5dbf4c341ee4       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           47 seconds ago      Running             kube-apiserver              2                   9518e23dd5b9b
	5f39439703b10       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           47 seconds ago      Running             etcd                        2                   4983a9bf59823
	
	* 
	* ==> coredns [7f16dde1fc9b15e0c5a936ad881565e64ec0797e071ca0f3615f75be1d7a7ba5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210816221555-6487
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210816221555-6487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=no-preload-20210816221555-6487
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210816221555-6487
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:24:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:23:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 22:24:25 +0000   Mon, 16 Aug 2021 22:24:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20210816221555-6487
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                caddf44c-6818-4116-a33d-8b1403a4962e
	  Boot ID:                    fb7b5690-fedc-46af-96ea-1f6e59faa09d
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-zmc4x                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-20210816221555-6487                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-pz7lz                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-20210816221555-6487              250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-20210816221555-6487    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-82g44                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-20210816221555-6487              100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 metrics-server-7c784ccb57-b466w                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         26s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-9ndkn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-v5svh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 49s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x4 over 49s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x4 over 49s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x4 over 49s)  kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29s                kubelet  Node no-preload-20210816221555-6487 status is now: NodeReady
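	This node view comes straight from the API server and can be regenerated at any point, for example:
	    $ kubectl --context no-preload-20210816221555-6487 describe node no-preload-20210816221555-6487
	    $ kubectl --context no-preload-20210816221555-6487 get node no-preload-20210816221555-6487 \
	        -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}'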
	
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[  +2.050041] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-98b3ee991257
	[  +0.000002] ll header: 00000000: 02 42 b1 3b 84 51 02 42 c0 a8 43 02 08 00        .B.;.Q.B..C...
	[ +12.284935] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-4ed2783b447d
	[  +0.000002] ll header: 00000000: 02 42 d5 d5 90 49 02 42 c0 a8 3a 02 08 00        .B...I.B..:...
	[Aug16 22:24] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth56124561
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be 79 e6 a5 9f 5c 08 06        .......y...\..
	[  +0.399250] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth4983c387
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 17 8f 85 e1 65 08 06        ...........e..
	[  +1.600259] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethb3548ee1
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 2f f8 2f f9 94 08 06        ......././....
	[  +0.559633] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth46787027
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 c0 9d 8e b4 af 08 06        ......b.......
	[  +0.039851] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethf7560af3
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f2 14 37 d6 50 f8 08 06        ........7.P...
	[  +0.363946] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethf46edfa1
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 d9 41 00 33 05 08 06        ........A.3...
	[  +0.104031] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev vethfcc96108
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ba 37 d4 b4 15 b7 08 06        .......7......
	[  +0.594832] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-98b3ee991257
	[  +0.000002] ll header: 00000000: 02 42 b1 3b 84 51 02 42 c0 a8 43 02 08 00        .B.;.Q.B..C...
	[  +0.885371] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethb5a2c402
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea d4 10 cd 8e c5 08 06        ..............
	[  +0.101530] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth85ef4cf2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 50 31 fa e3 5b 08 06        ......2P1..[..
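	The martian-source entries above are emitted by the kernel's reverse-path filtering; whether they are logged at all is a sysctl setting (a sketch, assuming a root shell on the host):
	    $ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians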
	
	* 
	* ==> etcd [5f39439703b1098bcb2c996d36f81db6b863d3bc7dda1a069509a87e2ff0a3b1] <==
	* {"level":"info","ts":"2021-08-16T22:23:43.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-16T22:23:43.914Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:23:43.916Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:23:44.337Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.338Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20210816221555-6487 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:23:44.339Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:23:44.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-16T22:23:44.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:24:31 up  1:03,  0 users,  load average: 1.15, 2.13, 2.12
	Linux no-preload-20210816221555-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a5dbf4c341ee400d40847a5f8d81d87f6ae62104bf8f40f5c10b88c4913deb64] <==
	* I0816 22:23:47.427127       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0816 22:23:47.427493       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:23:47.428837       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:23:47.432459       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:23:47.435196       1 controller.go:611] quota admission added evaluator for: namespaces
	I0816 22:23:48.325444       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:23:48.325470       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:23:48.333617       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0816 22:23:48.336304       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0816 22:23:48.336326       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:23:48.673414       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:23:48.716181       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0816 22:23:48.834775       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0816 22:23:48.835511       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:23:48.838634       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:23:49.368623       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:23:50.424799       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:23:50.457048       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:23:55.619243       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:24:02.873090       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:24:02.922366       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0816 22:24:07.935138       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:24:07.935229       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:24:07.935245       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
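	The 503 for v1beta1.metrics.k8s.io above reflects the still-unready metrics-server pod behind the aggregated API; its condition can be read directly:
	    $ kubectl get apiservice v1beta1.metrics.k8s.io \
	        -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'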
	
	* 
	* ==> kube-controller-manager [3b6e5503185321aa0b9f2a8dd00d97f87a6d5995e4ead91d0eb34f104511e1c1] <==
	* I0816 22:24:05.119447       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:24:05.133798       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:24:05.232522       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-b466w"
	I0816 22:24:05.814201       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:24:05.924559       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.015140       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.015521       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:24:06.024070       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.024233       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.025625       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.030476       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.030549       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.032073       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:24:06.037969       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:24:06.038204       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.038230       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.038244       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.116752       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.116765       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.126219       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.126966       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:24:06.212510       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:24:06.212628       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:24:06.222346       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-9ndkn"
	I0816 22:24:06.322689       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-v5svh"
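	The FailedCreate burst above is the usual race between a Deployment and its ServiceAccount being applied; the ReplicaSets retry and succeed once the accounts exist, which can be confirmed with:
	    $ kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	    $ kubectl -n kube-system get serviceaccount metrics-server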
	
	* 
	* ==> kube-proxy [e919cbdfb244328186a80d3e1a9645c58a3901f4e9bbfcf04ae307ef0d568d5c] <==
	* I0816 22:24:04.125483       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0816 22:24:04.125545       1 server_others.go:140] Detected node IP 192.168.67.2
	W0816 22:24:04.125576       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0816 22:24:04.239777       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:24:04.239817       1 server_others.go:212] Using iptables Proxier.
	I0816 22:24:04.239831       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:24:04.239848       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:24:04.240307       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:24:04.243470       1 config.go:315] Starting service config controller
	I0816 22:24:04.243495       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:24:04.243513       1 config.go:224] Starting endpoint slice config controller
	I0816 22:24:04.243517       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0816 22:24:04.320692       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210816221555-6487.169be9b2c501819d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed7410e7f54ae, ext:327495362, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210816221555-6487", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:
"no-preload-20210816221555-6487", UID:"no-preload-20210816221555-6487", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210816221555-6487.169be9b2c501819d" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:24:04.343806       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:24:04.343865       1 shared_informer.go:247] Caches are synced for service config 
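	With the empty proxy mode falling back to iptables above, the installed service chains can be confirmed on the node (a sketch, assuming root access via minikube ssh):
	    $ sudo iptables-save | grep -c KUBE-SVC   # counts service chains written by the iptables proxier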
	
	* 
	* ==> kube-scheduler [43dbac811c6beba2363524514bdb89ffacc43063e93d24959fe2698b532d9852] <==
	* W0816 22:23:47.349574       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:23:47.431614       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0816 22:23:47.431707       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 22:23:47.431738       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:23:47.431760       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0816 22:23:47.433230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:47.433308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:23:47.434256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:47.434488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:47.435645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436095       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:47.436208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:47.436366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:47.436512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:47.436592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:47.436663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:47.436749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:47.436886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:48.350685       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:48.377694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:48.385659       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:48.492379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0816 22:23:51.532734       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
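	The Forbidden errors above are a normal startup race: the scheduler starts before its RBAC bindings are reconciled, and the errors stop once caches sync at 22:23:51. The permissions can be spot-checked afterwards (assuming the caller may impersonate):
	    $ kubectl auth can-i list pods --as=system:kube-scheduler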
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:18:22 UTC, end at Mon 2021-08-16 22:24:31 UTC. --
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.752546    4408 scope.go:110] "RemoveContainer" containerID="d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:06.753087    4408 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": container with ID starting with d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3 not found: ID does not exist" containerID="d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.753143    4408 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3} err="failed to get container status \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": rpc error: code = NotFound desc = could not find container \"d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3\": container with ID starting with d75e996c27de0384d75e01b0a14e81df310c962987f7b73fdc18bb5e107b36d3 not found: ID does not exist"
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.812212    4408 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19540592-5a1c-41ae-bf8a-67a910086cad-kube-api-access-8pj5s" (OuterVolumeSpecName: "kube-api-access-8pj5s") pod "19540592-5a1c-41ae-bf8a-67a910086cad" (UID: "19540592-5a1c-41ae-bf8a-67a910086cad"). InnerVolumeSpecName "kube-api-access-8pj5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.831959    4408 reconciler.go:319] "Volume detached for volume \"kube-api-access-8pj5s\" (UniqueName: \"kubernetes.io/projected/19540592-5a1c-41ae-bf8a-67a910086cad-kube-api-access-8pj5s\") on node \"no-preload-20210816221555-6487\" DevicePath \"\""
	Aug 16 22:24:06 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:06.832004    4408 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19540592-5a1c-41ae-bf8a-67a910086cad-config-volume\") on node \"no-preload-20210816221555-6487\" DevicePath \"\""
	Aug 16 22:24:09 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:09.638437    4408 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=19540592-5a1c-41ae-bf8a-67a910086cad path="/var/lib/kubelet/pods/19540592-5a1c-41ae-bf8a-67a910086cad/volumes"
	Aug 16 22:24:13 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:13.774849    4408 scope.go:110] "RemoveContainer" containerID="72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:14.778178    4408 scope.go:110] "RemoveContainer" containerID="72c39da7e58ac47e06b5da8502c699b70cc8cc4c51a04729d8b0dd1fa692c90a"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:14.778336    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:14 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:14.778696    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:15 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:15.782121    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:15 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:15.782340    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:16.038831    4408 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-e919cbdfb244328186a80d3e1a9645c58a3901f4e9bbfcf04ae307ef0d568d5c.scope\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:16.783790    4408 scope.go:110] "RemoveContainer" containerID="dd8fdf37b52d4120ed233eea8cefb706b1179849b55c8f08029c1420d88e0310"
	Aug 16 22:24:16 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:16.784062    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-9ndkn_kubernetes-dashboard(0220ee63-e330-4c8e-a161-dda26dae3ebb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-9ndkn" podUID=0220ee63-e330-4c8e-a161-dda26dae3ebb
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653025    4408 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653075    4408 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653231    4408 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kg478,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{
Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vol
umeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-b466w_kube-system(4161efc3-7c01-456e-b9d5-6c09ca70c1f9): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 16 22:24:17 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:17.653281    4408 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-b466w" podUID=4161efc3-7c01-456e-b9d5-6c09ca70c1f9
	Aug 16 22:24:26 no-preload-20210816221555-6487 kubelet[4408]: E0816 22:24:26.064159    4408 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2/docker/65a501908096112bb2675466c1dd3aa24ab71cbe70391ffecee870c2dc6b0aa2\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:24:27 no-preload-20210816221555-6487 kubelet[4408]: I0816 22:24:27.064587    4408 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:24:27 no-preload-20210816221555-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [17b741f1cf88828974c837b2de6dc1a07c30943a1b7fc0823246d4f3ece9069c] <==
	* 2021/08/16 22:24:15 Using namespace: kubernetes-dashboard
	2021/08/16 22:24:15 Using in-cluster config to connect to apiserver
	2021/08/16 22:24:15 Using secret token for csrf signing
	2021/08/16 22:24:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:24:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:24:15 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/16 22:24:15 Generating JWE encryption key
	2021/08/16 22:24:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:24:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:24:15 Initializing JWE encryption key from synchronized object
	2021/08/16 22:24:15 Creating in-cluster Sidecar client
	2021/08/16 22:24:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:24:15 Serving insecurely on HTTP port: 9090
	2021/08/16 22:24:15 Starting overwatch
	
	* 
	* ==> storage-provisioner [03ab9f1a4628206cf1e1ca0b6d15e457fa8e4988879154f8fb91512b2a4e77c6] <==
	* I0816 22:24:06.746837       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:24:06.757243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:24:06.757285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:24:06.821358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:24:06.821520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"058f8982-ac05-44d6-bc85-3a80d87b7013", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26 became leader
	I0816 22:24:06.821569       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26!
	I0816 22:24:06.921969       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210816221555-6487_f3ee5552-f11a-48aa-86ba-0b1889bd4f26!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487: exit status 2 (340.82987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-b466w
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w: exit status 1 (60.223475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-b466w" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210816221555-6487 describe pod metrics-server-7c784ccb57-b466w: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (115.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210816222436-6487 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210816222436-6487 --alsologtostderr -v=1: exit status 80 (1.824623003s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210816222436-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:26:12.448628  267097 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:26:12.448723  267097 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:12.448733  267097 out.go:311] Setting ErrFile to fd 2...
	I0816 22:26:12.448744  267097 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:12.448865  267097 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:26:12.449044  267097 out.go:305] Setting JSON to false
	I0816 22:26:12.449067  267097 mustload.go:65] Loading cluster: newest-cni-20210816222436-6487
	I0816 22:26:12.449390  267097 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:12.449790  267097 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:12.490169  267097 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:12.490922  267097 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210816222436-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:26:12.493336  267097 out.go:177] * Pausing node newest-cni-20210816222436-6487 ... 
	I0816 22:26:12.493378  267097 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:12.493679  267097 ssh_runner.go:149] Run: systemctl --version
	I0816 22:26:12.493722  267097 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:12.534134  267097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:12.627729  267097 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:12.636480  267097 pause.go:50] kubelet running: true
	I0816 22:26:12.636536  267097 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:26:12.762231  267097 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:26:12.762316  267097 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:26:12.831302  267097 cri.go:76] found id: "f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15"
	I0816 22:26:12.831330  267097 cri.go:76] found id: "753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605"
	I0816 22:26:12.831334  267097 cri.go:76] found id: "13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82"
	I0816 22:26:12.831339  267097 cri.go:76] found id: "fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a"
	I0816 22:26:12.831342  267097 cri.go:76] found id: "3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d"
	I0816 22:26:12.831347  267097 cri.go:76] found id: "58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366"
	I0816 22:26:12.831350  267097 cri.go:76] found id: "57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc"
	I0816 22:26:12.831353  267097 cri.go:76] found id: ""
	I0816 22:26:12.831393  267097 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:26:12.869183  267097 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","pid":1515,"status":"running","bundle":"/run/containers/storage/overlay-containers/13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82/userdata","rootfs":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","created":"2021-08-16T22:26:09.904206624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c37ba2b3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c37ba2b3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.780029767Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.c
ri-o.MountPoint":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/containers/kindnet-cni/a8cebd53\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/volumes/kubernetes.io~projected/kube-api-access-g9qbx\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","pid":1072,"status":"running","bundle":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata","rootfs":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","created":"2021-08-16T22:26:04.364243046Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"df7780f9bad91c25553392d513174b4b\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722798713Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.C
ontainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.270132319Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da.log","io.kuber
netes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"uid\":\"df7780f9bad91c25553392d513174b4b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/r
un/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","pid":1481,"status":"running","bundle":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata","rootfs":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","created":"2021-08-16T22:26:09.724304555Z","annotations":{"app":"kindnet","controller-revision
-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718108109Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.63467994Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-4wtm6","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kindnet\",\"io.kuberne
tes.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"app\":\"kindnet\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-4wtm6\",\"uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.
kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/shm","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3
d7f10a4f130c9d","pid":1218,"status":"running","bundle":"/run/containers/storage/overlay-containers/3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d/userdata","rootfs":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","created":"2021-08-16T22:26:04.804152394Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6d77abef","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6d77abef\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3240bbb3d92273f37b404871473ddb237
cebd4ed0410db9db3d7f10a4f130c9d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.535773486Z","io.kubernetes.cri-o.Image":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","io.kubernetes.cri-o.Name":"
k8s_etcd_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/containers/etcd/8fcee24a\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etc
d\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","pid":1082,"status":"running","bundle":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userd
ata","rootfs":"/var/lib/containers/storage/overlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","created":"2021-08-16T22:26:04.412512583Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.67.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722832993Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.277085618Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o
.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210816222436-6487\",\"uid\":\"2536aaf528548213b1dcf04092b35557\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage
/overlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","io.kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.
kubernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","pid":1770,"status":"running","bundle":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata","rootfs":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","created":"2021-08-16T22:26:10.652301837Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubec
tl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718106458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","i
o.kubernetes.cri-o.ContainerID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.533021059Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provis
ioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":
"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountN
ame\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc/userdata","rootfs":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","created":"2021-08-16T22:26:04.71199704Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","i
o.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.526726259Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod
.uid\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.
kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/containers/kube-scheduler/31ce0ac3\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.sy
stemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","pid":1227,"status":"running","bundle":"/run/containers/storage/overlay-containers/58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366/userdata","rootfs":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","created":"2021-08-16T22:26:04.804141197Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ae03f07e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ae03f07e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.
container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.551211853Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/k
ube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[
{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/containers/kube-controller-manager/26f0b290\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\
":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","pid":1622,"status":"running","bundle":"/run/containers/storage/overlay-containers/753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605/userdata","rootfs":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee
5c93ff8b8b91806a23/merged","created":"2021-08-16T22:26:10.244251768Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d30aa744","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d30aa744\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.131225619Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.
ImageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee5c93ff8b8b91806a23/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernete
s.cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/containers/kube-proxy/ae979876\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~con
figmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~projected/kube-api-access-sg56l\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","pid":1083,"status":"running","bundle":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata","rootfs":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe6dc4572d
722a83b6d9fad70610c5b98eb8c155a/merged","created":"2021-08-16T22:26:04.432266214Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722839153Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.293348506Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b
6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe
6dc4572d722a83b6d9fad70610c5b98eb8c155a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system",
"io.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","pid":1081,"status":"running","bundle":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata","rootfs":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","created":"2021-08-16T22:26:04.396202176Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722836100Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"63
5a1391ce04ca4800e0ff652a9e51f1\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.67.2:8443\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.290562079Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"ti
er\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cr
i-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'
","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","pid":1835,"status":"running","bundle":"/run/containers/storage/overlay-containers/f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15/userdata","rootfs":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","created":"2021-08-16T22:26:10.900186218Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"77587bea","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"77587bea\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessageP
olicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.747131876Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.ku
bernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/etc-hosts\",\"readonl
y\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/containers/storage-provisioner/aaf67dd6\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/volumes/kubernetes.io~projected/kube-api-access-rl77x\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-prov
isioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","pid":1226,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a/userdata","rootfs":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","created":"2021-08-16T22:26:04.804155588Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.containe
r.hash":"1388e005","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1388e005\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.540281506Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd29153
3dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce
061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/containers/kube-apiserver/98ab4167\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/v
ar/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","pid":1559,"status":"running","bundle":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be49
6d8892155d5bcdc0d096465b5316c5621bc2c3/userdata","rootfs":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","created":"2021-08-16T22:26:10.016259937Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718074299Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.932075675Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/f
f5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-242br","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"5cb9855ccb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-242br\",\"uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","io.kubernetes.cri-o.Name":"
k8s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/shm","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","
kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"}]
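
The JSON array above is the raw "sudo runc list -f json" payload that the cri.go lines below iterate over. As a minimal sketch (not minikube's actual cri.go, and with a hypothetical stand-in payload in place of the full dump), decoding just the two fields the subsequent filter keys on could look like:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // runcContainer keeps only the fields the filtering below uses; the real
    // payload also carries pid, bundle, rootfs, annotations, and more.
    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func main() {
    	// Hypothetical stand-in for the full dump above (IDs truncated).
    	raw := []byte(`[{"id":"13ea21137eec","status":"running"},{"id":"23128383f5cf","status":"running"}]`)
    	var containers []runcContainer
    	if err := json.Unmarshal(raw, &containers); err != nil {
    		panic(err)
    	}
    	fmt.Printf("list returned %d containers\n", len(containers))
    	for _, c := range containers {
    		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
    	}
    }
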
	I0816 22:26:12.869856  267097 cri.go:113] list returned 14 containers
	I0816 22:26:12.869870  267097 cri.go:116] container: {ID:13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 Status:running}
	I0816 22:26:12.869879  267097 cri.go:116] container: {ID:23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da Status:running}
	I0816 22:26:12.869883  267097 cri.go:118] skipping 23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da - not in ps
	I0816 22:26:12.869889  267097 cri.go:116] container: {ID:2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce Status:running}
	I0816 22:26:12.869900  267097 cri.go:118] skipping 2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce - not in ps
	I0816 22:26:12.869904  267097 cri.go:116] container: {ID:3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d Status:running}
	I0816 22:26:12.869908  267097 cri.go:116] container: {ID:413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 Status:running}
	I0816 22:26:12.869912  267097 cri.go:118] skipping 413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 - not in ps
	I0816 22:26:12.869916  267097 cri.go:116] container: {ID:4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 Status:running}
	I0816 22:26:12.869920  267097 cri.go:118] skipping 4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 - not in ps
	I0816 22:26:12.869923  267097 cri.go:116] container: {ID:57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc Status:running}
	I0816 22:26:12.869928  267097 cri.go:116] container: {ID:58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366 Status:running}
	I0816 22:26:12.869938  267097 cri.go:116] container: {ID:753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605 Status:running}
	I0816 22:26:12.869946  267097 cri.go:116] container: {ID:a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c Status:running}
	I0816 22:26:12.869950  267097 cri.go:118] skipping a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c - not in ps
	I0816 22:26:12.869954  267097 cri.go:116] container: {ID:cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a Status:running}
	I0816 22:26:12.869958  267097 cri.go:118] skipping cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a - not in ps
	I0816 22:26:12.869961  267097 cri.go:116] container: {ID:f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15 Status:running}
	I0816 22:26:12.869965  267097 cri.go:116] container: {ID:fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a Status:running}
	I0816 22:26:12.869972  267097 cri.go:116] container: {ID:ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 Status:running}
	I0816 22:26:12.869975  267097 cri.go:118] skipping ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 - not in ps
	I0816 22:26:12.870009  267097 ssh_runner.go:149] Run: sudo runc pause 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82
	I0816 22:26:12.884370  267097 ssh_runner.go:149] Run: sudo runc pause 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d
	I0816 22:26:12.896581  267097 retry.go:31] will retry after 276.165072ms: runc: sudo runc pause 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:26:12Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:26:13.173031  267097 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:13.183078  267097 pause.go:50] kubelet running: false
	I0816 22:26:13.183134  267097 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
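
Here minikube checks whether kubelet is active (exit status 0 from systemctl is-active) and then disables and stops it, presumably so kubelet does not restart or unpause the containers being paused. A minimal sketch mirroring the two logged commands (hypothetical, not minikube's actual pause.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 from "systemctl is-active" means the unit is active.
    	running := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    	fmt.Printf("kubelet running: %v\n", running)
    	// Disable and stop kubelet in one step, as in the log line above.
    	if out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput(); err != nil {
    		fmt.Printf("disable kubelet: %v: %s\n", err, out)
    	}
    }
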
	I0816 22:26:13.288898  267097 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:26:13.288973  267097 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:26:13.358606  267097 cri.go:76] found id: "f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15"
	I0816 22:26:13.358629  267097 cri.go:76] found id: "753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605"
	I0816 22:26:13.358636  267097 cri.go:76] found id: "13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82"
	I0816 22:26:13.358641  267097 cri.go:76] found id: "fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a"
	I0816 22:26:13.358647  267097 cri.go:76] found id: "3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d"
	I0816 22:26:13.358652  267097 cri.go:76] found id: "58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366"
	I0816 22:26:13.358657  267097 cri.go:76] found id: "57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc"
	I0816 22:26:13.358662  267097 cri.go:76] found id: ""
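
The crictl pipeline above is a single shell invocation assembled from one "crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>" clause per namespace, joined with semicolons; namespaces with no matching containers contribute nothing, which is consistent with the empty trailing "found id" entry. A minimal sketch of building that command string (hypothetical helper, assuming the namespace list from the log):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Namespaces from the "listing CRI containers" log line above.
    	namespaces := []string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}
    	clauses := make([]string, 0, len(namespaces))
    	for _, ns := range namespaces {
    		clauses = append(clauses, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
    	}
    	// Matches the logged command: sudo -s eval "<clause>; <clause>; ..."
    	fmt.Printf("sudo -s eval %q\n", strings.Join(clauses, "; "))
    }
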
	I0816 22:26:13.358699  267097 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:26:13.397667  267097 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","pid":1515,"status":"paused","bundle":"/run/containers/storage/overlay-containers/13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82/userdata","rootfs":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","created":"2021-08-16T22:26:09.904206624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c37ba2b3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c37ba2b3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.780029767Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cr
i-o.MountPoint":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":
\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/containers/kindnet-cni/a8cebd53\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/volumes/kubernetes.io~projected/kube-api-access-g9qbx\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0
.2-dev","id":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","pid":1072,"status":"running","bundle":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata","rootfs":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","created":"2021-08-16T22:26:04.364243046Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"df7780f9bad91c25553392d513174b4b\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722798713Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.Co
ntainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.270132319Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da.log","io.kubern
etes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"uid\":\"df7780f9bad91c25553392d513174b4b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/ru
n/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","pid":1481,"status":"running","bundle":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata","rootfs":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","created":"2021-08-16T22:26:09.724304555Z","annotations":{"app":"kindnet","controller-revision-
hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718108109Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.63467994Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-4wtm6","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kindnet\",\"io.kubernet
es.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"app\":\"kindnet\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-4wtm6\",\"uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.k
ubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/shm","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d
7f10a4f130c9d","pid":1218,"status":"running","bundle":"/run/containers/storage/overlay-containers/3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d/userdata","rootfs":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","created":"2021-08-16T22:26:04.804152394Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6d77abef","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6d77abef\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3240bbb3d92273f37b404871473ddb237c
ebd4ed0410db9db3d7f10a4f130c9d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.535773486Z","io.kubernetes.cri-o.Image":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","io.kubernetes.cri-o.Name":"k
8s_etcd_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/containers/etcd/8fcee24a\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd
\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","pid":1082,"status":"running","bundle":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userda
ta","rootfs":"/var/lib/containers/storage/overlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","created":"2021-08-16T22:26:04.412512583Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.67.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722832993Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.277085618Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.
HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210816222436-6487\",\"uid\":\"2536aaf528548213b1dcf04092b35557\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/
overlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","io.kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.k
ubernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","pid":1770,"status":"running","bundle":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata","rootfs":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","created":"2021-08-16T22:26:10.652301837Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubect
l.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718106458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io
.kubernetes.cri-o.ContainerID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.533021059Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisi
oner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"
4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountNa
me\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc/userdata","rootfs":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","created":"2021-08-16T22:26:04.71199704Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io
.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.526726259Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.
uid\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.k
ubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/containers/kube-scheduler/31ce0ac3\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.sys
temd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","pid":1227,"status":"running","bundle":"/run/containers/storage/overlay-containers/58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366/userdata","rootfs":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","created":"2021-08-16T22:26:04.804141197Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ae03f07e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ae03f07e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.c
ontainer.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.551211853Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/ku
be-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{
\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/containers/kube-controller-manager/26f0b290\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\"
:\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","pid":1622,"status":"running","bundle":"/run/containers/storage/overlay-containers/753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605/userdata","rootfs":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee5
c93ff8b8b91806a23/merged","created":"2021-08-16T22:26:10.244251768Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d30aa744","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d30aa744\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.131225619Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.I
mageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee5c93ff8b8b91806a23/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernetes
.cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/containers/kube-proxy/ae979876\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~conf
igmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~projected/kube-api-access-sg56l\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","pid":1083,"status":"running","bundle":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata","rootfs":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe6dc4572d7
22a83b6d9fad70610c5b98eb8c155a/merged","created":"2021-08-16T22:26:04.432266214Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722839153Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.293348506Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6
cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe6
dc4572d722a83b6d9fad70610c5b98eb8c155a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","
io.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","pid":1081,"status":"running","bundle":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata","rootfs":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","created":"2021-08-16T22:26:04.396202176Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722836100Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"635
a1391ce04ca4800e0ff652a9e51f1\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.67.2:8443\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.290562079Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"tie
r\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri
-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'"
,"tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","pid":1835,"status":"running","bundle":"/run/containers/storage/overlay-containers/f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15/userdata","rootfs":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","created":"2021-08-16T22:26:10.900186218Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"77587bea","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"77587bea\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePo
licy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.747131876Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kub
ernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/etc-hosts\",\"readonly
\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/containers/storage-provisioner/aaf67dd6\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/volumes/kubernetes.io~projected/kube-api-access-rl77x\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provi
sioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","pid":1226,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a/userdata","rootfs":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","created":"2021-08-16T22:26:04.804155588Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container
.hash":"1388e005","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1388e005\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.540281506Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533
dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce0
61b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/containers/kube-apiserver/98ab4167\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/va
r/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","pid":1559,"status":"running","bundle":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496
d8892155d5bcdc0d096465b5316c5621bc2c3/userdata","rootfs":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","created":"2021-08-16T22:26:10.016259937Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718074299Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.932075675Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ff
5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-242br","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"5cb9855ccb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-242br\",\"uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","io.kubernetes.cri-o.Name":"k
8s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/shm","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","k
ubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"}]
	I0816 22:26:13.398287  267097 cri.go:113] list returned 14 containers
	I0816 22:26:13.398301  267097 cri.go:116] container: {ID:13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 Status:paused}
	I0816 22:26:13.398311  267097 cri.go:122] skipping {13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 paused}: state = "paused", want "running"
	I0816 22:26:13.398322  267097 cri.go:116] container: {ID:23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da Status:running}
	I0816 22:26:13.398326  267097 cri.go:118] skipping 23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da - not in ps
	I0816 22:26:13.398330  267097 cri.go:116] container: {ID:2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce Status:running}
	I0816 22:26:13.398335  267097 cri.go:118] skipping 2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce - not in ps
	I0816 22:26:13.398338  267097 cri.go:116] container: {ID:3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d Status:running}
	I0816 22:26:13.398342  267097 cri.go:116] container: {ID:413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 Status:running}
	I0816 22:26:13.398346  267097 cri.go:118] skipping 413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 - not in ps
	I0816 22:26:13.398350  267097 cri.go:116] container: {ID:4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 Status:running}
	I0816 22:26:13.398354  267097 cri.go:118] skipping 4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 - not in ps
	I0816 22:26:13.398357  267097 cri.go:116] container: {ID:57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc Status:running}
	I0816 22:26:13.398361  267097 cri.go:116] container: {ID:58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366 Status:running}
	I0816 22:26:13.398368  267097 cri.go:116] container: {ID:753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605 Status:running}
	I0816 22:26:13.398373  267097 cri.go:116] container: {ID:a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c Status:running}
	I0816 22:26:13.398378  267097 cri.go:118] skipping a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c - not in ps
	I0816 22:26:13.398381  267097 cri.go:116] container: {ID:cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a Status:running}
	I0816 22:26:13.398385  267097 cri.go:118] skipping cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a - not in ps
	I0816 22:26:13.398389  267097 cri.go:116] container: {ID:f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15 Status:running}
	I0816 22:26:13.398393  267097 cri.go:116] container: {ID:fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a Status:running}
	I0816 22:26:13.398400  267097 cri.go:116] container: {ID:ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 Status:running}
	I0816 22:26:13.398404  267097 cri.go:118] skipping ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 - not in ps
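	(The cri.go:116-122 lines above show the pause path's selection filter: a container is kept only when runc reports it as "running" and its ID also appeared in the earlier crictl ps output. A minimal Go sketch of that filter follows; the type and function names are hypothetical, not minikube's actual implementation.)

	// Sketch of the selection filter implied by the cri.go lines above.
	// Hypothetical names; not minikube's actual code.
	package main

	import "fmt"

	type container struct {
		ID     string
		Status string
	}

	// selectRunning keeps a container only if runc reports it "running"
	// and its ID was also returned by the preceding "crictl ps" listing.
	func selectRunning(listed []container, inPs map[string]bool) []string {
		var keep []string
		for _, c := range listed {
			if c.Status != "running" {
				continue // e.g. skipping {13ea21137eec... paused}: state = "paused", want "running"
			}
			if !inPs[c.ID] {
				continue // "skipping ... - not in ps"
			}
			keep = append(keep, c.ID)
		}
		return keep
	}

	func main() {
		listed := []container{
			{ID: "13ea21137eec", Status: "paused"},
			{ID: "3240bbb3d922", Status: "running"},
		}
		inPs := map[string]bool{"3240bbb3d922": true}
		fmt.Println(selectRunning(listed, inPs)) // [3240bbb3d922]
	}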
	I0816 22:26:13.398440  267097 ssh_runner.go:149] Run: sudo runc pause 3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d
	I0816 22:26:13.413204  267097 ssh_runner.go:149] Run: sudo runc pause 3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d 57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc
	I0816 22:26:13.425796  267097 retry.go:31] will retry after 540.190908ms: runc: sudo runc pause 3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d 57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:26:13Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:26:13.966509  267097 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:13.975958  267097 pause.go:50] kubelet running: false
	I0816 22:26:13.976013  267097 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:26:14.079072  267097 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:26:14.079150  267097 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:26:14.147568  267097 cri.go:76] found id: "f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15"
	I0816 22:26:14.147590  267097 cri.go:76] found id: "753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605"
	I0816 22:26:14.147594  267097 cri.go:76] found id: "13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82"
	I0816 22:26:14.147599  267097 cri.go:76] found id: "fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a"
	I0816 22:26:14.147602  267097 cri.go:76] found id: "3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d"
	I0816 22:26:14.147606  267097 cri.go:76] found id: "58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366"
	I0816 22:26:14.147610  267097 cri.go:76] found id: "57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc"
	I0816 22:26:14.147613  267097 cri.go:76] found id: ""
	I0816 22:26:14.147646  267097 ssh_runner.go:149] Run: sudo runc list -f json
	I0816 22:26:14.184970  267097 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","pid":1515,"status":"paused","bundle":"/run/containers/storage/overlay-containers/13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82/userdata","rootfs":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","created":"2021-08-16T22:26:09.904206624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c37ba2b3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c37ba2b3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.780029767Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cr
i-o.MountPoint":"/var/lib/containers/storage/overlay/aea9ae68a9adcb066af659886645d2b8350382c901967b1aa39ae331ca641173/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":
\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/containers/kindnet-cni/a8cebd53\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f784c344-70ae-41f8-b749-4bd3d26179d1/volumes/kubernetes.io~projected/kube-api-access-g9qbx\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0
.2-dev","id":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","pid":1072,"status":"running","bundle":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata","rootfs":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","created":"2021-08-16T22:26:04.364243046Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"df7780f9bad91c25553392d513174b4b\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722798713Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.Co
ntainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.270132319Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da.log","io.kubern
etes.cri-o.Metadata":"{\"name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"uid\":\"df7780f9bad91c25553392d513174b4b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a38a2bb704a1b920ea1ab985e4a6a7308dd3fd9f60a69e55b1e065487c3af817/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/ru
n/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","pid":1481,"status":"running","bundle":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata","rootfs":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","created":"2021-08-16T22:26:09.724304555Z","annotations":{"app":"kindnet","controller-revision-
hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718108109Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.63467994Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-4wtm6","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kindnet\",\"io.kubernet
es.pod.uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"app\":\"kindnet\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kindnet-4wtm6\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-4wtm6_f784c344-70ae-41f8-b749-4bd3d26179d1/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-4wtm6\",\"uid\":\"f784c344-70ae-41f8-b749-4bd3d26179d1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e26acaf4721f16f078ffc0beb74894b4d71053689f976543d4cae2303fd83111/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-4wtm6_kube-system_f784c344-70ae-41f8-b749-4bd3d26179d1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.k
ubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce/userdata/shm","io.kubernetes.pod.name":"kindnet-4wtm6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f784c344-70ae-41f8-b749-4bd3d26179d1","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-16T22:26:08.718108109Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d
7f10a4f130c9d","pid":1218,"status":"paused","bundle":"/run/containers/storage/overlay-containers/3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d/userdata","rootfs":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","created":"2021-08-16T22:26:04.804152394Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6d77abef","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6d77abef\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3240bbb3d92273f37b404871473ddb237ce
bd4ed0410db9db3d7f10a4f130c9d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.535773486Z","io.kubernetes.cri-o.Image":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7aa9a1d38ac4f3b510295c350ae27eb4a35352eef8f0b0a7952aa8ff013aa71c/merged","io.kubernetes.cri-o.Name":"k8
s_etcd_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SandboxName":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2536aaf528548213b1dcf04092b35557/containers/etcd/8fcee24a\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\
",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","pid":1082,"status":"running","bundle":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdat
a","rootfs":"/var/lib/containers/storage/overlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","created":"2021-08-16T22:26:04.412512583Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.67.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722832993Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"2536aaf528548213b1dcf04092b35557\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.277085618Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.H
ostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"2536aaf528548213b1dcf04092b35557\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-newest-cni-20210816222436-6487\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-newest-cni-20210816222436-6487_2536aaf528548213b1dcf04092b35557/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-newest-cni-20210816222436-6487\",\"uid\":\"2536aaf528548213b1dcf04092b35557\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/o
verlay/b4cf57233241c8bf7cd570e54f2f425610e53ac9da4fb9fd8c5ff1b568648354/merged","io.kubernetes.cri-o.Name":"k8s_etcd-newest-cni-20210816222436-6487_kube-system_2536aaf528548213b1dcf04092b35557_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29/userdata/shm","io.kubernetes.pod.name":"etcd-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.ku
bernetes.pod.uid":"2536aaf528548213b1dcf04092b35557","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"2536aaf528548213b1dcf04092b35557","kubernetes.io/config.seen":"2021-08-16T22:26:03.722832993Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","pid":1770,"status":"running","bundle":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata","rootfs":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","created":"2021-08-16T22:26:10.652301837Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl
.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718106458Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.
kubernetes.cri-o.ContainerID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.533021059Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisio
ner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/612d1a4d86c516d2620f304018c3f4d674ebff3ef5757f10a3ee932bfa4a3c37/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4
162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountNam
e\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc/userdata","rootfs":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","created":"2021-08-16T22:26:04.71199704Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a0decd21","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.
kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a0decd21\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.526726259Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.u
id\":\"df7780f9bad91c25553392d513174b4b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816222436-6487_df7780f9bad91c25553392d513174b4b/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24af443ea04d39903cb89840f0daad118f56c2b0d326c32cb7a1befcf629e122/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-newest-cni-20210816222436-6487_kube-system_df7780f9bad91c25553392d513174b4b_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.ku
bernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df7780f9bad91c25553392d513174b4b/containers/kube-scheduler/31ce0ac3\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.hash":"df7780f9bad91c25553392d513174b4b","kubernetes.io/config.seen":"2021-08-16T22:26:03.722798713Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.syst
emd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","pid":1227,"status":"running","bundle":"/run/containers/storage/overlay-containers/58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366/userdata","rootfs":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","created":"2021-08-16T22:26:04.804141197Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ae03f07e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ae03f07e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.co
ntainer.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.551211853Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/kub
e-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a44d492b65605a87f4e088a7f6991fcffde1501941f048ac11394abb7c2fbe91/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\
"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/containers/kube-controller-manager/26f0b290\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9f6bead6a8413b6ae4e9a9523f4c96b1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":
\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","pid":1622,"status":"running","bundle":"/run/containers/storage/overlay-containers/753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605/userdata","rootfs":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee5c
93ff8b8b91806a23/merged","created":"2021-08-16T22:26:10.244251768Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d30aa744","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d30aa744\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.131225619Z","io.kubernetes.cri-o.Image":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Im
ageName":"k8s.gcr.io/kube-proxy:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d9e5f39d687c7c121427b55ffd0bc509a268a0786f7eee5c93ff8b8b91806a23/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernetes.
cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/containers/kube-proxy/ae979876\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~confi
gmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/volumes/kubernetes.io~projected/kube-api-access-sg56l\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","pid":1083,"status":"running","bundle":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata","rootfs":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe6dc4572d72
2a83b6d9fad70610c5b98eb8c155a/merged","created":"2021-08-16T22:26:04.432266214Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722839153Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.293348506Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6c
bde98f8b09bf5647d38ecdcf4611e2c8c/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816222436-6487_9f6bead6a8413b6ae4e9a9523f4c96b1/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-newest-cni-20210816222436-6487\",\"uid\":\"9f6bead6a8413b6ae4e9a9523f4c96b1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/154c9093df65af69c4c4210fe6d
c4572d722a83b6d9fad70610c5b98eb8c155a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-newest-cni-20210816222436-6487_kube-system_9f6bead6a8413b6ae4e9a9523f4c96b1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","i
o.kubernetes.pod.uid":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.hash":"9f6bead6a8413b6ae4e9a9523f4c96b1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722839153Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","pid":1081,"status":"running","bundle":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata","rootfs":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","created":"2021-08-16T22:26:04.396202176Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-16T22:26:03.722836100Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"635a
1391ce04ca4800e0ff652a9e51f1\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.67.2:8443\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.290562079Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"tier
\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"uid\":\"635a1391ce04ca4800e0ff652a9e51f1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a09c3af809e59d5eea859de7553676f11c898692ebb43d5175cca9256f298c63/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-
o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'",
"tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","pid":1835,"status":"running","bundle":"/run/containers/storage/overlay-containers/f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15/userdata","rootfs":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","created":"2021-08-16T22:26:10.900186218Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"77587bea","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"77587bea\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePol
icy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:10.747131876Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a71ed147-1a32-4360-9bcc-722db25ff42e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a71ed147-1a32-4360-9bcc-722db25ff42e/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a71ed147-1a32-4360-9bcc-722db25ff42e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/etc-hosts\",\"readonly\
":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/containers/storage-provisioner/aaf67dd6\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a71ed147-1a32-4360-9bcc-722db25ff42e/volumes/kubernetes.io~projected/kube-api-access-rl77x\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a71ed147-1a32-4360-9bcc-722db25ff42e","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provis
ioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-16T22:26:08.718106458Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","pid":1226,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a/userdata","rootfs":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","created":"2021-08-16T22:26:04.804155588Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.
hash":"1388e005","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1388e005\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-16T22:26:04.540281506Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533d
fe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-newest-cni-20210816222436-6487\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"635a1391ce04ca4800e0ff652a9e51f1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816222436-6487_635a1391ce04ca4800e0ff652a9e51f1/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8cd4f2b84af69108bc39b1c9abf4e4441c8587cd8b1d187db0f1865bffa7d948/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cef5c6e15441436acdce06
1b395adf5ea783ba53d02a4517c3d569ecb66c2e1a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-newest-cni-20210816222436-6487_kube-system_635a1391ce04ca4800e0ff652a9e51f1_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/containers/kube-apiserver/98ab4167\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/635a1391ce04ca4800e0ff652a9e51f1/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var
/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-newest-cni-20210816222436-6487","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"635a1391ce04ca4800e0ff652a9e51f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"635a1391ce04ca4800e0ff652a9e51f1","kubernetes.io/config.seen":"2021-08-16T22:26:03.722836100Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","pid":1559,"status":"running","bundle":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d
8892155d5bcdc0d096465b5316c5621bc2c3/userdata","rootfs":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","created":"2021-08-16T22:26:10.016259937Z","annotations":{"controller-revision-hash":"5cb9855ccb","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-16T22:26:08.718074299Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-16T22:26:09.932075675Z","io.kubernetes.cri-o.HostName":"newest-cni-20210816222436-6487","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ff5
a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-242br","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-242br\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"5cb9855ccb\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.pod.uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-242br_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-242br\",\"uid\":\"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6d52423cd038ec0f4e4982749012f79a10bd3700c63c1c2d02192fa77dbf64c3/merged","io.kubernetes.cri-o.Name":"k8
s_kube-proxy-242br_kube-system_91a06e4b-7a8f-4f7c-a698-3f40c4024f1f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3/userdata/shm","io.kubernetes.pod.name":"kube-proxy-242br","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-16T22:26:08.718074299Z","ku
bernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"}]
	I0816 22:26:14.185572  267097 cri.go:113] list returned 14 containers
	I0816 22:26:14.185584  267097 cri.go:116] container: {ID:13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 Status:paused}
	I0816 22:26:14.185594  267097 cri.go:122] skipping {13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82 paused}: state = "paused", want "running"
	I0816 22:26:14.185611  267097 cri.go:116] container: {ID:23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da Status:running}
	I0816 22:26:14.185616  267097 cri.go:118] skipping 23128383f5cf74ba59d173f9cb126a576d1e186ab969f9bfd0fc220c8d7918da - not in ps
	I0816 22:26:14.185620  267097 cri.go:116] container: {ID:2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce Status:running}
	I0816 22:26:14.185624  267097 cri.go:118] skipping 2f59203e18012de03c0289b5ffb8fea3795a12ddb29d41c2b12ce260c7526fce - not in ps
	I0816 22:26:14.185630  267097 cri.go:116] container: {ID:3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d Status:paused}
	I0816 22:26:14.185637  267097 cri.go:122] skipping {3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d paused}: state = "paused", want "running"
	I0816 22:26:14.185641  267097 cri.go:116] container: {ID:413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 Status:running}
	I0816 22:26:14.185645  267097 cri.go:118] skipping 413ba52501c760794bb0b6dadaf3f8c96cf5c44a9f9cab71e42b1f4c1705dd29 - not in ps
	I0816 22:26:14.185649  267097 cri.go:116] container: {ID:4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 Status:running}
	I0816 22:26:14.185656  267097 cri.go:118] skipping 4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 - not in ps
	I0816 22:26:14.185659  267097 cri.go:116] container: {ID:57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc Status:running}
	I0816 22:26:14.185666  267097 cri.go:116] container: {ID:58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366 Status:running}
	I0816 22:26:14.185670  267097 cri.go:116] container: {ID:753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605 Status:running}
	I0816 22:26:14.185677  267097 cri.go:116] container: {ID:a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c Status:running}
	I0816 22:26:14.185682  267097 cri.go:118] skipping a82deb4862d7a61a41e613bbfb34b6cbde98f8b09bf5647d38ecdcf4611e2c8c - not in ps
	I0816 22:26:14.185685  267097 cri.go:116] container: {ID:cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a Status:running}
	I0816 22:26:14.185689  267097 cri.go:118] skipping cef5c6e15441436acdce061b395adf5ea783ba53d02a4517c3d569ecb66c2e1a - not in ps
	I0816 22:26:14.185693  267097 cri.go:116] container: {ID:f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15 Status:running}
	I0816 22:26:14.185697  267097 cri.go:116] container: {ID:fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a Status:running}
	I0816 22:26:14.185701  267097 cri.go:116] container: {ID:ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 Status:running}
	I0816 22:26:14.185705  267097 cri.go:118] skipping ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 - not in ps
	I0816 22:26:14.185738  267097 ssh_runner.go:149] Run: sudo runc pause 57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc
	I0816 22:26:14.201034  267097 ssh_runner.go:149] Run: sudo runc pause 57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc 58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366
	I0816 22:26:14.217525  267097 out.go:177] 
	W0816 22:26:14.217655  267097 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc 58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:26:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:26:14.217676  267097 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:26:14.220155  267097 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:26:14.221613  267097 out.go:177] 

                                                
                                                
** /stderr **
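The cri.go lines in the stderr above show the selection logic: minikube lists every container runc knows about, then keeps only the ones whose state is "running" and whose ID also appeared in the earlier crictl ps listing (the "not in ps" skips). A minimal sketch of that filter, with hypothetical types and shortened IDs; this is not minikube's actual cri package:

    // Sketch of the filtering shown in the cri.go log lines above:
    // a container is a pause candidate only if its state is "running"
    // and its ID was also present in the `crictl ps` output.
    package main

    import "fmt"

    type container struct {
        ID     string
        Status string
    }

    func filterRunning(all []container, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" {
                continue // paused containers are skipped
            }
            if !inPs[c.ID] {
                continue // sandbox (POD) containers are "not in ps"
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        // IDs shortened for readability.
        all := []container{
            {ID: "13ea21137eec", Status: "paused"},
            {ID: "57e924dea591", Status: "running"},
            {ID: "a82deb4862d7", Status: "running"},
        }
        inPs := map[string]bool{"57e924dea591": true}
        fmt.Println(filterRunning(all, inPs)) // [57e924dea591]
    }
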
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210816222436-6487 --alsologtostderr -v=1 failed: exit status 80
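The root cause is visible in the runc usage text quoted above: `runc pause` accepts exactly one container ID per invocation, but minikube passed two IDs to a single command (`sudo runc pause 57e9... 5884...`), so runc exited with status 1 and the pause test failed with GUEST_PAUSE. A minimal sketch of the obvious workaround, pausing each container in its own invocation; the pauseContainers helper is hypothetical and not minikube's actual fix:

    // Sketch: pause several CRI-O containers one at a time, since
    // `runc pause` takes exactly one container ID per invocation.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func pauseContainers(ids []string) error {
        for _, id := range ids {
            // One `runc pause <container-id>` call per container,
            // as the runc usage text requires.
            out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput()
            if err != nil {
                return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        ids := []string{
            "57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc",
            "58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366",
        }
        if err := pauseContainers(ids); err != nil {
            fmt.Println(err)
        }
    }
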
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210816222436-6487
helpers_test.go:236: (dbg) docker inspect newest-cni-20210816222436-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc",
	        "Created": "2021-08-16T22:24:38.030106374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:25:48.579218939Z",
	            "FinishedAt": "2021-08-16T22:25:46.279450757Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/hosts",
	        "LogPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc-json.log",
	        "Name": "/newest-cni-20210816222436-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210816222436-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210816222436-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210816222436-6487",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210816222436-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210816222436-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210816222436-6487",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210816222436-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b84a8f9e1cc416eb4c4f9ff52b1d5e69a8c2f0440846f81250334b54f9ed210",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2b84a8f9e1cc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210816222436-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7eb22a12bc4a"
	                    ],
	                    "NetworkID": "4afc94db92aaa17c48dae232abb8ef915eb989a2e4b5988b73079293b4e62510",
	                    "EndpointID": "bdf5a3354e5a9ddd36f3f991dcb8d14b7e34b2f581ee2d461103ff25a372dfa4",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
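The dump above is the complete docker inspect JSON; when a post-mortem only needs a couple of fields (container state, the mapped SSH port), docker inspect's --format flag takes a Go template and avoids parsing the whole document. A small sketch, reusing the profile name from this test; the inspectField helper is illustrative:

    // Sketch: pull just the container state and SSH host port from
    // `docker inspect` with Go templates instead of the full JSON dump.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func inspectField(name, format string) (string, error) {
        out, err := exec.Command("docker", "inspect", "--format", format, name).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const name = "newest-cni-20210816222436-6487"
        state, _ := inspectField(name, "{{.State.Status}}") // e.g. "running"
        sshPort, _ := inspectField(name,
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
        fmt.Printf("state=%s ssh=127.0.0.1:%s\n", state, sshPort)
    }
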
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487: exit status 2 (17.315901017s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:26:31.597302  267424 status.go:422] Error apiserver status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
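The status error above is the apiserver's /healthz endpoint returning 500: each [+]/[-] line is one registered health check, and here only etcd is failing (its reason is withheld from the caller). The same per-check listing can be requested directly with the ?verbose query parameter; a sketch, assuming the default minikube setup where /healthz is reachable anonymously over the self-signed endpoint:

    // Sketch: query the apiserver health endpoint the same way the
    // status check above does, with ?verbose for the [+]/[-] listing.
    // Skipping TLS verification is an assumption that fits a default
    // minikube apiserver, not a general recommendation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz?verbose")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A 500 here with "[-]etcd failed" matches the status error above.
        fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
    }
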
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 logs -n 25: exit status 110 (19.471654934s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | pause-20210816221349-6487 logs                             | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:30 UTC | Mon, 16 Aug 2021 22:19:31 UTC |
	|         | -n 25                                                      |                                                |         |         |                               |                               |
	| delete  | -p pause-20210816221349-6487                               | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:32 UTC | Mon, 16 Aug 2021 22:19:35 UTC |
	|         | --alsologtostderr -v=5                                     |                                                |         |         |                               |                               |
	| profile | list --output json                                         | minikube                                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:35 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p pause-20210816221349-6487                               | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
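Editor's note: the audit table above ends with the second start of the newest-cni profile. For reference, that invocation can be reconstructed from the table's own flags (a sketch assuming a locally built minikube binary; the dated profile name is simply what the test generated):

    minikube start -p newest-cni-20210816222436-6487 --memory=2200 \
      --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubelet.network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.22.0-rc.0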
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
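Editor's note: the two W-level warnings above are raised after minikube parses `docker system info`; this Debian 9 host reports SwapLimit:false, so the requested --memory=2200 cap cannot be fully enforced by the cgroup. A quick way to see the same fields minikube inspected, using documented `docker info` template keys (a sketch, not part of the test run):

    docker info --format '{{.MemoryLimit}} {{.SwapLimit}} {{.CgroupDriver}}'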
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
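Editor's note: the SSH fragment above is minikube's idempotent hostname fix-up: unless some /etc/hosts line already ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. The result can be spot-checked from the host (a sketch, reusing this report's profile name):

    minikube -p newest-cni-20210816222436-6487 ssh "grep 127.0.1.1 /etc/hosts"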
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
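Editor's note: this step writes a one-line drop-in, /etc/sysconfig/crio.minikube, marking the service CIDR 10.96.0.0/12 as an insecure registry range, then restarts CRI-O to pick it up. The literal %!s(MISSING) in the logged command is almost certainly a Go format-verb artifact from minikube's logger rather than what actually ran; the tee'd file can be read back directly (a sketch):

    minikube -p newest-cni-20210816222436-6487 ssh "cat /etc/sysconfig/crio.minikube"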
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
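Editor's note: three small config edits happen here: /etc/crictl.yaml points crictl at CRI-O's socket, and two in-place sed edits to /etc/crio/crio.conf pin the pause image to k8s.gcr.io/pause:3.4.1 and make kindnet the default CNI network. All three land in plain files inside the node container (a verification sketch):

    minikube -p newest-cni-20210816222436-6487 ssh \
      "cat /etc/crictl.yaml; grep -E 'pause_image|cni_default_network' /etc/crio/crio.conf"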
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
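Editor's note: the exit-status-255 sysctl above is expected on a fresh node: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube treats the failed read as benign, loads the module, and then enables IPv4 forwarding. The same sequence, lifted verbatim from the commands in this log:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"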
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
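Editor's note: this grep/rewrite pair keeps /etc/hosts idempotent: any stale host.minikube.internal line is filtered out and a fresh entry for the gateway 192.168.67.1 is written via a temp file and `sudo cp`, since a plain shell redirect would not run under sudo; the same pattern repeats further down for control-plane.minikube.internal. Both entries can be confirmed at once (a sketch):

    minikube -p newest-cni-20210816222436-6487 ssh "grep minikube.internal /etc/hosts"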
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
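Editor's note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered to /var/tmp/minikube/kubeadm.yaml.new, as the 2212-byte scp a few lines below confirms. The "0%!"(MISSING) strings are again logger formatting artifacts; minikube's template sets those evictionHard thresholds to plain "0%". To read the file as actually written (a sketch):

    minikube -p newest-cni-20210816222436-6487 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"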
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
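Editor's note: the unit text above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 603-byte scp below); the empty ExecStart= line clears any inherited start command before the full CRI-O flavored one is set. The merged unit can be reviewed in place (a sketch):

    minikube -p newest-cni-20210816222436-6487 ssh "systemctl cat kubelet"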
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
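
	Note: the <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's hashed-directory convention: TLS verifiers locate a CA in /etc/ssl/certs by the subject-name hash of its certificate, so each PEM needs a link named after that hash. A sketch of how one such link is derived, built from the same commands the log runs:

	    # print the subject hash OpenSSL will search for, then create the link
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
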
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
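
	Note: each "Checking apiserver status" round above is a single pgrep probe, retried on a roughly 200ms cadence (per the timestamps) until a deadline; once every probe has failed, minikube concludes the apiserver is down and the cluster needs a reconfigure rather than a plain health wait. A rough shell equivalent of that probe loop (the deadline and sleep values here are illustrative; the real loop is Go code in api_server.go):

	    # poll for a running kube-apiserver, giving up after a short deadline
	    deadline=$((SECONDS + 3))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	        if [ "$SECONDS" -ge "$deadline" ]; then
	            echo "apiserver process never appeared" >&2
	            break
	        fi
	        sleep 0.2
	    done
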
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
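
	Note: rather than a full `kubeadm init`, the restart path replays individual init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml. Stripped of the sudo/env PATH wrapper, the sequence run above is:

	    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
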
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
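
	Note: the probes above hit https://192.168.67.2:8443/healthz about every 500ms. A 500 with the per-check [+]/[-] breakdown means the apiserver is serving but some poststart hooks (RBAC bootstrap roles, CRD informer sync, apiservice registration) have not finished; the wait ends at the first plain 200 "ok". The same probe can be reproduced by hand (flags are illustrative; -k accepts the cluster's self-signed serving certificate, and this assumes the default anonymous access to /healthz):

	    # ?verbose asks for the per-hook breakdown even when the check passes
	    curl -k "https://192.168.67.2:8443/healthz?verbose"
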
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
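
	Note: this is the fallback path for pid 238595's profile: after `kubeadm reset`, a full `kubeadm init` is re-run, suppressing every preflight check that cannot pass inside a docker container (swap, memory, SystemVerification) or that would trip over leftovers from the previous cluster (ports, data dirs, static-pod manifests). Reduced to its shape:

	    # the full comma-separated ignore list appears in the log line above
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification
	        # ...plus the DirAvailable-- and FileAvailable-- entries
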
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
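
	Note: addon installation follows one pattern throughout: each manifest is written into /etc/kubernetes/addons (the "scp memory" lines), then a single kubectl apply with repeated -f flags creates all of an addon's objects in one invocation, always using the in-VM kubeconfig and the version-pinned kubectl binary rather than the host's kubectl. The shape of the dashboard apply above:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply \
	        -f /etc/kubernetes/addons/dashboard-ns.yaml \
	        -f /etc/kubernetes/addons/dashboard-svc.yaml
	        # ...plus the other eight dashboard manifests listed in the log line above
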
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:25:48 UTC, end at Mon 2021-08-16 22:26:32 UTC. --
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.919511998Z" level=info msg="Created container 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82: kube-system/kindnet-4wtm6/kindnet-cni" id=ea2e3713-f8c8-4b5f-92d8-661a454f51cd name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.920164222Z" level=info msg="Starting container: 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82" id=33299c18-2b29-4129-8a50-c16b0ae1896a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.920490527Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-242br/POD" id=50d91f6a-327a-48af-af01-89e51f350506 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.931078770Z" level=info msg="Started container 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82: kube-system/kindnet-4wtm6/kindnet-cni" id=33299c18-2b29-4129-8a50-c16b0ae1896a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.116338754Z" level=info msg="Ran pod sandbox ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 with infra container: kube-system/kube-proxy-242br/POD" id=50d91f6a-327a-48af-af01-89e51f350506 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.117498357Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.0-rc.0" id=2985f9b5-b86b-45ee-9057-8f3411c06a2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.118268346Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,RepoTags:[k8s.gcr.io/kube-proxy:v1.22.0-rc.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8 k8s.gcr.io/kube-proxy@sha256:d7d96bcbac7bfcb2eec40f086186850c1492540b1feed855f937d68d375d7980],Size_:105449192,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2985f9b5-b86b-45ee-9057-8f3411c06a2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.119122662Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.0-rc.0" id=d758234d-e17f-47b5-a3e7-d5ff95f3c32c name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.119731699Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,RepoTags:[k8s.gcr.io/kube-proxy:v1.22.0-rc.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8 k8s.gcr.io/kube-proxy@sha256:d7d96bcbac7bfcb2eec40f086186850c1492540b1feed855f937d68d375d7980],Size_:105449192,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d758234d-e17f-47b5-a3e7-d5ff95f3c32c name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.120608805Z" level=info msg="Creating container: kube-system/kube-proxy-242br/kube-proxy" id=6200b506-a13e-4e28-9fae-7610d3265429 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.267861426Z" level=info msg="Created container 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605: kube-system/kube-proxy-242br/kube-proxy" id=6200b506-a13e-4e28-9fae-7610d3265429 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.268449638Z" level=info msg="Starting container: 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605" id=8fbdd443-6843-47b2-a02b-af9ea0be365c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.280946153Z" level=info msg="Started container 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605: kube-system/kube-proxy-242br/kube-proxy" id=8fbdd443-6843-47b2-a02b-af9ea0be365c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.521302094Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=c42277d0-c651-4a4c-b26d-89eb1777ae49 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.730474500Z" level=info msg="Ran pod sandbox 4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 with infra container: kube-system/storage-provisioner/POD" id=c42277d0-c651-4a4c-b26d-89eb1777ae49 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.732461992Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1aafcef-096c-44c1-b923-60bc0c443438 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.733195035Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c1aafcef-096c-44c1-b923-60bc0c443438 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.733990959Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4dcb58f3-656b-410e-9384-bfd3e24ba7c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.734612329Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4dcb58f3-656b-410e-9384-bfd3e24ba7c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.735573481Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=137cd6d3-4d81-4c2f-8dea-d27a95295cd7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.747348302Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged/etc/passwd: no such file or directory"
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.747403698Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged/etc/group: no such file or directory"
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.917509537Z" level=info msg="Created container f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15: kube-system/storage-provisioner/storage-provisioner" id=137cd6d3-4d81-4c2f-8dea-d27a95295cd7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.918155037Z" level=info msg="Starting container: f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15" id=16bebaa4-e904-4e42-8087-0a748cce6b21 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.930999065Z" level=info msg="Started container f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15: kube-system/storage-provisioner/storage-provisioner" id=16bebaa4-e904-4e42-8087-0a748cce6b21 name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f53cda0af472e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Exited              storage-provisioner       0                   4162e9d2bc7a1
	753269be7f7c3       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c   21 seconds ago      Running             kube-proxy                1                   ff5a70388e769
	13ea21137eecc       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   22 seconds ago      Running             kindnet-cni               1                   2f59203e18012
	fe20235c8dfab       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a   27 seconds ago      Running             kube-apiserver            1                   cef5c6e154414
	3240bbb3d9227       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba   27 seconds ago      Running             etcd                      1                   413ba52501c76
	58841932c26d9       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c   27 seconds ago      Running             kube-controller-manager   1                   a82deb4862d7a
	57e924dea5914       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75   27 seconds ago      Running             kube-scheduler            1                   23128383f5cf7
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[ +21.708046] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth701fb51e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff b2 9a 4f fd 9e c8 08 06        ........O.....
	[  +0.998722] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth1836fdd8
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1a 35 45 60 a5 0d 08 06        .......5E`....
	[  +0.482687] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethf0451ca7
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e f8 87 22 4e 34 08 06        ......~.."N4..
	[  +0.190206] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethead2f008
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 6e 0d 2b 02 21 9d 08 06        ......n.+.!...
	[  +0.644281] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000004] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.218276] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.219980] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.463957] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.895921] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	
	* 
	* ==> etcd [3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d] <==
	* {"level":"info","ts":"2021-08-16T22:26:05.022Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.0","cluster-id":"9d8fdeb88b6def78","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8688e899f7831fc7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20210816222436-6487 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:26:05.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-16T22:26:05.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:26:50 up  1:06,  0 users,  load average: 2.75, 2.65, 2.32
	Linux newest-cni-20210816222436-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a] <==
	* I0816 22:26:09.465882       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:26:09.470117       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	W0816 22:26:10.323103       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:26:10.323199       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:26:10.323214       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:26:10.346304       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:26:10.470938       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:26:10.488300       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:26:10.554187       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:26:10.619796       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:26:11.021063       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:26:11.615523       1 controller.go:611] quota admission added evaluator for: namespaces
	E0816 22:26:25.045815       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:26:25.045971       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:25.047126       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:25.048286       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:26:25.049484       1 trace.go:205] Trace[55985525]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:2f9ce8bf-dbb6-49e5-9d75-e32f2b4e34e9,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:26:15.049) (total time: 9999ms):
	Trace[55985525]: [9.999476378s] [9.999476378s] END
	E0816 22:26:25.056215       1 timeout.go:135] post-timeout activity - time-elapsed: 10.243423ms, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	I0816 22:26:50.850587       1 trace.go:205] Trace[759024639]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (16-Aug-2021 22:26:32.152) (total time: 18697ms):
	Trace[759024639]: [18.697843129s] [18.697843129s] END
	E0816 22:26:50.850637       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc0108b42a0)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0816 22:26:50.850890       1 trace.go:205] Trace[904487519]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:9a179e70-4fd0-462b-bb35-5d0f2c0dc7ef,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:26:32.152) (total time: 18698ms):
	Trace[904487519]: [18.698198174s] [18.698198174s] END
	
	* 
	* ==> kube-controller-manager [58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366] <==
	* I0816 22:26:12.142950       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
	I0816 22:26:12.142967       1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
	I0816 22:26:12.144608       1 controllermanager.go:577] Started "serviceaccount"
	I0816 22:26:12.144734       1 serviceaccounts_controller.go:117] Starting service account controller
	I0816 22:26:12.144744       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0816 22:26:12.148311       1 controllermanager.go:577] Started "daemonset"
	I0816 22:26:12.148401       1 daemon_controller.go:284] Starting daemon sets controller
	I0816 22:26:12.148457       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0816 22:26:12.150576       1 controllermanager.go:577] Started "replicaset"
	I0816 22:26:12.150693       1 replica_set.go:186] Starting replicaset controller
	I0816 22:26:12.150712       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0816 22:26:12.152360       1 controllermanager.go:577] Started "csrapproving"
	I0816 22:26:12.152478       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0816 22:26:12.152493       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0816 22:26:12.154253       1 controllermanager.go:577] Started "statefulset"
	I0816 22:26:12.154380       1 stateful_set.go:148] Starting stateful set controller
	I0816 22:26:12.154406       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0816 22:26:12.156021       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0816 22:26:12.156115       1 pv_controller_base.go:308] Starting persistent volume controller
	I0816 22:26:12.156133       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0816 22:26:12.159038       1 controllermanager.go:577] Started "disruption"
	I0816 22:26:12.159152       1 disruption.go:363] Starting disruption controller
	I0816 22:26:12.159169       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0816 22:26:12.160695       1 node_ipam_controller.go:91] Sending events to api server.
	I0816 22:26:12.221698       1 shared_informer.go:247] Caches are synced for tokens 
	
	* 
	* ==> kube-proxy [753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605] <==
	* I0816 22:26:10.355577       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0816 22:26:10.355649       1 server_others.go:140] Detected node IP 192.168.67.2
	W0816 22:26:10.355665       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0816 22:26:10.381353       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:26:10.381411       1 server_others.go:212] Using iptables Proxier.
	I0816 22:26:10.381426       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:26:10.381446       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:26:10.381755       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:26:10.382328       1 config.go:315] Starting service config controller
	I0816 22:26:10.382361       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:26:10.382404       1 config.go:224] Starting endpoint slice config controller
	I0816 22:26:10.382426       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0816 22:26:10.412645       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210816222436-6487.169be9d0237b2576", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed76096c98cb5, ext:100367828, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210816222436-6487", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210816222436-6487", UID:"newest-cni-20210816222436-6487", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210816222436-6487.169be9d0237b2576" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:26:10.483401       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:26:10.484094       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc] <==
	* W0816 22:26:04.778787       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0816 22:26:05.520406       1 serving.go:347] Generated self-signed cert in-memory
	I0816 22:26:08.540205       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0816 22:26:08.540236       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 22:26:08.540249       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0816 22:26:08.540253       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:08.540273       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0816 22:26:08.540302       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0816 22:26:08.540625       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0816 22:26:08.540696       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0816 22:26:08.640851       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0816 22:26:08.640864       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0816 22:26:08.641008       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:25:48 UTC, end at Mon 2021-08-16 22:26:51 UTC. --
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813522     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a71ed147-1a32-4360-9bcc-722db25ff42e-tmp\") pod \"storage-provisioner\" (UID: \"a71ed147-1a32-4360-9bcc-722db25ff42e\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813548     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b98c23d-9fa2-44dd-b9af-b1bf3215cd88-tmp-dir\") pod \"metrics-server-7c784ccb57-j52xp\" (UID: \"8b98c23d-9fa2-44dd-b9af-b1bf3215cd88\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813589     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt2l\" (UniqueName: \"kubernetes.io/projected/8b98c23d-9fa2-44dd-b9af-b1bf3215cd88-kube-api-access-mzt2l\") pod \"metrics-server-7c784ccb57-j52xp\" (UID: \"8b98c23d-9fa2-44dd-b9af-b1bf3215cd88\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813616     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f-xtables-lock\") pod \"kube-proxy-242br\" (UID: \"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813642     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg56l\" (UniqueName: \"kubernetes.io/projected/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f-kube-api-access-sg56l\") pod \"kube-proxy-242br\" (UID: \"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813672     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f784c344-70ae-41f8-b749-4bd3d26179d1-xtables-lock\") pod \"kindnet-4wtm6\" (UID: \"f784c344-70ae-41f8-b749-4bd3d26179d1\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813698     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f784c344-70ae-41f8-b749-4bd3d26179d1-lib-modules\") pod \"kindnet-4wtm6\" (UID: \"f784c344-70ae-41f8-b749-4bd3d26179d1\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813712     810 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:08.835660     810 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.186909     810 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzbk2\" (UniqueName: \"kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2\") pod \"6fe4486f-609a-4711-8984-d211fafbc14a\" (UID: \"6fe4486f-609a-4711-8984-d211fafbc14a\") "
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.186959     810 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume\") pod \"6fe4486f-609a-4711-8984-d211fafbc14a\" (UID: \"6fe4486f-609a-4711-8984-d211fafbc14a\") "
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: W0816 22:26:09.187532     810 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes/kubernetes.io~projected/kube-api-access-qzbk2: clearQuota called, but quotas disabled
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.187580     810 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2" (OuterVolumeSpecName: "kube-api-access-qzbk2") pod "6fe4486f-609a-4711-8984-d211fafbc14a" (UID: "6fe4486f-609a-4711-8984-d211fafbc14a"). InnerVolumeSpecName "kube-api-access-qzbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: W0816 22:26:09.187718     810 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.187831     810 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fe4486f-609a-4711-8984-d211fafbc14a" (UID: "6fe4486f-609a-4711-8984-d211fafbc14a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.288220     810 reconciler.go:319] "Volume detached for volume \"kube-api-access-qzbk2\" (UniqueName: \"kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2\") on node \"newest-cni-20210816222436-6487\" DevicePath \"\""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.288262     810 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume\") on node \"newest-cni-20210816222436-6487\" DevicePath \"\""
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:10.282747     810 request.go:665] Waited for 1.070351149s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:10.842405     810 pod_workers.go:747] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/metrics-server-7c784ccb57-j52xp" podUID=8b98c23d-9fa2-44dd-b9af-b1bf3215cd88
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:10.842508     810 pod_workers.go:747] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-78fcd69978-sh8hf" podUID=99ca4da4-63c0-4eb5-b1a9-824580994bf0
	Aug 16 22:26:11 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:11.841109     810 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6fe4486f-609a-4711-8984-d211fafbc14a path="/var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes"
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:26:12 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:12.753868     810 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15] <==
	* 
	goroutine 89 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032c310, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032c300)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000374480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00043af00, 0x18e5530, 0xc00004a100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00052e0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052e0e0, 0x18b3d60, 0xc0004bd4a0, 0x1, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00052e0e0, 0x3b9aca00, 0x0, 0x1, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00052e0e0, 0x3b9aca00, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:26:50.855824  269003 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210816222436-6487
helpers_test.go:236: (dbg) docker inspect newest-cni-20210816222436-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc",
	        "Created": "2021-08-16T22:24:38.030106374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:25:48.579218939Z",
	            "FinishedAt": "2021-08-16T22:25:46.279450757Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/hosts",
	        "LogPath": "/var/lib/docker/containers/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc/7eb22a12bc4a4218f635298382b6c8aba5b495377c11b445856bcfc61c1280cc-json.log",
	        "Name": "/newest-cni-20210816222436-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210816222436-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210816222436-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcb7b9b30a6de5fdd887cc60352e20e42877b70743bd671357f119fd686884a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210816222436-6487",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210816222436-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210816222436-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210816222436-6487",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210816222436-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b84a8f9e1cc416eb4c4f9ff52b1d5e69a8c2f0440846f81250334b54f9ed210",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2b84a8f9e1cc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210816222436-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7eb22a12bc4a"
	                    ],
	                    "NetworkID": "4afc94db92aaa17c48dae232abb8ef915eb989a2e4b5988b73079293b4e62510",
	                    "EndpointID": "bdf5a3354e5a9ddd36f3f991dcb8d14b7e34b2f581ee2d461103ff25a372dfa4",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
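The inspect output above shows how the container's ports are published: each guest port (22, 2376, 5000, 8443, 32443) is bound to a distinct high port on 127.0.0.1, with SSH at 127.0.0.1:32969. As a hedged aside, the same mapping can be read back with either of the following commands (container name taken from the log; the template form is the one minikube itself runs later in this log):

	docker port newest-cni-20210816222436-6487 22
	# -> 127.0.0.1:32969
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-20210816222436-6487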
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487: exit status 2 (15.766246388s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:27:06.917446  271491 status.go:422] Error apiserver status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
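The 500 above is the apiserver's aggregated /healthz response: every sub-check passes except etcd, and that single failure fails the whole probe. A minimal sketch for re-running the same probe by hand, assuming the profile's kubectl context is still present in the kubeconfig:

	kubectl --context newest-cni-20210816222436-6487 get --raw='/healthz?verbose'
	# or query just the failing sub-check:
	kubectl --context newest-cni-20210816222436-6487 get --raw='/healthz/etcd'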
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 logs -n 25
E0816 22:27:14.499181    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
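This E-line comes from client-go's certificate-rotation watcher inside the test binary, not from the newest-cni cluster: it references a client.crt under the addons-20210816214127-6487 profile, most likely left dangling after that profile was torn down earlier in the run. A quick hedged check (path copied verbatim from the line above):

	ls -l /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt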

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 logs -n 25: exit status 110 (1m0.760349538s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                                         | minikube                                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:35 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p pause-20210816221349-6487                               | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
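The two warnings just above appear to be driven by the swap-limit gap visible in the docker info output earlier in this log (SwapLimit:false alongside "WARNING: No swap limit support"): without it, the requested --memory=2200 cannot be fully enforced by the kernel. A hedged way to verify the same facts directly on the host:

	docker info --format '{{.MemoryLimit}} {{.SwapLimit}}'   # here: true false
	grep memory /proc/cgroups                                # is the memory controller enabled?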
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
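The %!s(MISSING) in the SSH command a few lines above is Go's fmt placeholder for a missing printf argument in minikube's own log formatting, not part of what was executed; the echoed output confirms what actually landed on disk, i.e. /etc/sysconfig/crio.minikube now contains:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '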
	
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
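	The two sed invocations above are straightforward in-place edits of /etc/crio/crio.conf: pin the pause image and point CRI-O's default CNI network at "kindnet". A minimal Go sketch of the same rewrite (paths and values taken from the log; a sketch of the pattern, not minikube's actual code):

    // criocfg.go - rewrite two settings in crio.conf the way the sed commands do.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // (?m) makes ^ and $ match per line, like sed's line-oriented s///.
        data = regexp.MustCompile(`(?m)^pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "k8s.gcr.io/pause:3.4.1"`))
        data = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
            ReplaceAll(data, []byte(`cni_default_network = "kindnet"`))
        if err := os.WriteFile(path, data, 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }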
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
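	The three commands above handle CRI-O's kernel networking prerequisites: the sysctl probe checks for bridge-nf-call-iptables, modprobe br_netfilter supplies it when the probe fails (as it does here with status 255), and the echo enables IPv4 forwarding. A Go sketch of the recovery half (must run as root; illustrative only):

    // netprep.go - load br_netfilter and enable IPv4 forwarding.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Loading br_netfilter creates /proc/sys/net/bridge/bridge-nf-call-iptables.
        if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
            fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
            os.Exit(1)
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
            os.Exit(1)
        }
    }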
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
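	The grep/echo/cp one-liner above keeps the host record idempotent: drop any stale line ending in the hostname, append the current mapping, and copy the result back over /etc/hosts; the same pattern runs again later for control-plane.minikube.internal. A self-contained Go sketch of that edit (patchHosts is a hypothetical helper, not minikube's code):

    // hostspatch.go - replace a tab-separated /etc/hosts record in place.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func patchHosts(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale record for this hostname, like grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := patchHosts("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }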
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
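	The kubeadm config above is rendered from the options struct logged at kubeadm.go:153. Assuming a text/template-driven approach, a hedged sketch of how such a document can be produced; the struct and template fragment here are illustrative, not minikube's actual sources:

    // kubeadmtmpl.go - render an InitConfiguration fragment from parameters.
    package main

    import (
        "os"
        "text/template"
    )

    var tmpl = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta2\n" +
            "kind: InitConfiguration\n" +
            "localAPIEndpoint:\n" +
            "  advertiseAddress: {{.AdvertiseAddress}}\n" +
            "  bindPort: {{.APIServerPort}}\n" +
            "nodeRegistration:\n" +
            "  criSocket: {{.CRISocket}}\n" +
            "  name: \"{{.NodeName}}\"\n"))

    type params struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
    }

    func main() {
        _ = tmpl.Execute(os.Stdout, params{
            AdvertiseAddress: "192.168.67.2",
            APIServerPort:    8443,
            CRISocket:        "/var/run/crio/crio.sock",
            NodeName:         "newest-cni-20210816222436-6487",
        })
    }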
	
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
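	Each cert above gets the standard OpenSSL c_rehash treatment: compute the subject hash (3ec20f2e, b5213941, 51391683 in the log) with openssl x509 -hash -noout, then symlink <hash>.0 in /etc/ssl/certs to the PEM so TLS libraries can find the CA. A small Go sketch of one such step (requires the openssl CLI; paths from the log):

    // certrehash.go - hash a CA cert and create the <hash>.0 symlink.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func rehash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }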
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
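	The burst of pgrep runs above is a fixed-interval wait: re-run pgrep -xnf kube-apiserver.*minikube.* roughly every half second until it exits 0 or the deadline passes. A Go sketch of that loop (the pattern only, not minikube's api_server.go):

    // waitproc.go - wait for a process matching a pgrep pattern to appear.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }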
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
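	The healthz phase above polls https://192.168.67.2:8443/healthz every ~500ms, treating each 500 (whose body lists the poststarthooks still failing) as not-ready until a plain 200 "ok" arrives. A minimal Go sketch of the poll; InsecureSkipVerify stands in for the CA and client-cert handling the real client performs:

    // healthzwait.go - poll an apiserver healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered 200 "ok"
                }
                // 500 means some poststarthook is still failing; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }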
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
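The apiserver health wait above boils down to polling the /healthz endpoint until it answers 200 "ok". A minimal Go sketch of that poll, assuming a plain HTTPS GET with certificate verification disabled for brevity (waitForHealthz and its timeout are illustrative names, not minikube's actual API):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver healthz endpoint until it
    // returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        // A bootstrapping apiserver serves a self-signed certificate,
        // so this sketch skips verification rather than wiring up the CA.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200:\n%s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver healthz not ready within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }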
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
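Each addon above follows the same two-step pattern: an in-memory manifest is streamed to the node ("scp memory --> ..."), then applied with the node's bundled kubectl under the cluster kubeconfig. A self-contained Go sketch of that pattern; runSSH and applyAddon are hypothetical names, and to stay runnable the sketch shells out to a local bash instead of a real SSH session:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runSSH stands in for minikube's ssh_runner; here it runs the
    // command through a local bash to keep the sketch self-contained.
    func runSSH(cmd string, stdin []byte) error {
        c := exec.Command("/bin/bash", "-c", cmd)
        if stdin != nil {
            c.Stdin = bytes.NewReader(stdin)
        }
        return c.Run()
    }

    // applyAddon mirrors the two log steps: "scp memory --> dest",
    // then "kubectl apply -f dest" with the cluster kubeconfig.
    func applyAddon(manifest []byte, dest, kubectl string) error {
        if err := runSSH(fmt.Sprintf("sudo tee %s >/dev/null", dest), manifest); err != nil {
            return fmt.Errorf("copying %s: %w", dest, err)
        }
        return runSSH(fmt.Sprintf(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig %s apply -f %s", kubectl, dest), nil)
    }

    func main() {
        yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
        err := applyAddon(yaml, "/etc/kubernetes/addons/example.yaml",
            "/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl")
        if err != nil {
            fmt.Println(err)
        }
    }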
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
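The skew warning above comes from comparing the local kubectl's minor version with the cluster's (1.20 vs 1.22, hence "minor skew: 2"). A small Go sketch of that comparison, assuming simple "vX.Y.Z"-shaped version strings; minorSkew is an illustrative name:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor
    // version numbers of two "vX.Y.Z" strings.
    func minorSkew(client, cluster string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(client) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        if skew := minorSkew("1.20.5", "v1.22.0-rc.0"); skew > 1 {
            fmt.Printf("! kubectl minor version skew is %d; some commands may misbehave\n", skew)
        }
    }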
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.798667  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.298823  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.798898  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.299125  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.798939  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.298461  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.799163  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.298377  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.798518  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:32.299080  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.495517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:31.495703  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:33.496362  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:32.798224  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.298433  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.799075  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.298503  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.798223  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.299182  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.798578  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.298228  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.798801  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:37.299144  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.996187  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:38.495700  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:37.798260  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.298197  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.798424  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.917845  238595 kubeadm.go:985] duration metric: took 13.296684424s to wait for elevateKubeSystemPrivileges.
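The long run of "kubectl get sa default" lines above is the elevateKubeSystemPrivileges wait: after binding cluster-admin to the kube-system default service account, minikube retries the lookup roughly every 500ms until it succeeds. A sketch of that retry loop; the runner and timeout are assumptions, and the paths match the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until the
    // default service account exists or the deadline passes.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil // service account is present
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.21.3/kubectl", time.Minute); err != nil {
            fmt.Println(err)
        }
    }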
	I0816 22:26:38.917877  238595 kubeadm.go:392] StartCluster complete in 5m29.078278154s
	I0816 22:26:38.917895  238595 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:38.917976  238595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:38.919347  238595 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:39.435280  238595 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816221939-6487" rescaled to 1
	I0816 22:26:39.435337  238595 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:26:39.436884  238595 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:39.435381  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:39.436944  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:39.435407  238595 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:39.437054  238595 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437066  238595 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437084  238595 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437097  238595 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437107  238595 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.437111  238595 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:39.437119  238595 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.435601  238595 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:26:39.437127  238595 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:39.437075  238595 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437147  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437156  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	W0816 22:26:39.437157  238595 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:39.437098  238595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437219  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437580  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437673  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437680  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437786  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.450925  238595 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454454  238595 node_ready.go:49] node "default-k8s-different-port-20210816221939-6487" has status "Ready":"True"
	I0816 22:26:39.454478  238595 node_ready.go:38] duration metric: took 3.504801ms waiting for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454492  238595 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:39.461585  238595 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:39.496014  238595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:39.496143  238595 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.496159  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:39.497741  238595 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.496222  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.497808  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:39.497821  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:39.497865  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.499561  238595 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.499598  238595 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:39.499623  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.500057  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.508968  238595 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:39.510786  238595 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.510877  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:39.510894  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:39.510963  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.543137  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:26:39.551327  238595 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.551354  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:39.551418  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.562469  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.567015  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.585895  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.601932  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.730192  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:39.730216  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:39.735004  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:39.735028  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:39.825712  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:39.825735  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:39.828025  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:39.828046  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:39.829939  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.830581  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.917562  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.917594  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:39.918416  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:39.918442  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:39.934239  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.935303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:39.935323  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:40.024142  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:40.024168  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:40.121870  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:40.121954  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:40.213303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:40.213329  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:40.226600  238595 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
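The host record injection that just completed rewrites the CoreDNS Corefile, inserting a hosts block that maps host.minikube.internal to the gateway IP immediately before the forward plugin stanza (the sed pipeline on the earlier ssh_runner line). A string-level Go sketch of that rewrite, assuming the ConfigMap fetch and replace are handled elsewhere:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts block before the
    // "forward . /etc/resolv.conf" line, mirroring the sed rule.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hosts)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }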
	I0816 22:26:40.233649  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:40.233674  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:40.315993  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.316021  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:40.329860  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.913110  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08249574s)
	I0816 22:26:41.119373  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.185088873s)
	I0816 22:26:41.119413  238595 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:41.513353  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.183438758s)
	I0816 22:26:41.515520  238595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:26:41.515560  238595 addons.go:344] enableAddons completed in 2.080164328s
	I0816 22:26:41.516293  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:40.996044  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:42.996463  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:43.970224  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:45.016130  238595 pod_ready.go:92] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.016153  238595 pod_ready.go:81] duration metric: took 5.554536838s waiting for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.016169  238595 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020503  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.020523  238595 pod_ready.go:81] duration metric: took 4.344641ms waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020537  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024738  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.024753  238595 pod_ready.go:81] duration metric: took 4.208942ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024762  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028646  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.028661  238595 pod_ready.go:81] duration metric: took 3.89128ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028670  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032791  238595 pod_ready.go:92] pod "kube-proxy-4pmgn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.032812  238595 pod_ready.go:81] duration metric: took 4.13529ms waiting for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032823  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369533  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.369559  238595 pod_ready.go:81] duration metric: took 336.726404ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369571  238595 pod_ready.go:38] duration metric: took 5.915063438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
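Each pod_ready line above reduces to one test: a pod counts as "Ready" when its PodReady status condition is True. A sketch of that condition check using the k8s.io/api types; the sample pod in main is fabricated for illustration, whereas minikube reads the real one from the kube-system namespace:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is the test behind each pod_ready.go line above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Fabricated pod for illustration only.
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }
        fmt.Println("Ready:", isPodReady(pod))
    }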
	I0816 22:26:45.369595  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:45.369645  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:45.395595  238595 api_server.go:70] duration metric: took 5.960222514s to wait for apiserver process to appear ...
	I0816 22:26:45.395625  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:45.395637  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:26:45.400217  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:26:45.401067  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:26:45.401089  238595 api_server.go:129] duration metric: took 5.457124ms to wait for apiserver health ...
	I0816 22:26:45.401099  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:45.570973  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:26:45.571001  238595 system_pods.go:61] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.571006  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.571016  238595 system_pods.go:61] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.571020  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.571025  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.571028  238595 system_pods.go:61] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.571032  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.571039  238595 system_pods.go:61] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.571069  238595 system_pods.go:61] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:45.571074  238595 system_pods.go:74] duration metric: took 169.970426ms to wait for pod list to return data ...
	I0816 22:26:45.571085  238595 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:45.768620  238595 default_sa.go:45] found service account: "default"
	I0816 22:26:45.768644  238595 default_sa.go:55] duration metric: took 197.553773ms for default service account to be created ...
	I0816 22:26:45.768653  238595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:26:45.970940  238595 system_pods.go:86] 9 kube-system pods found
	I0816 22:26:45.970973  238595 system_pods.go:89] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.970982  238595 system_pods.go:89] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.970987  238595 system_pods.go:89] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.970993  238595 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.971000  238595 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.971006  238595 system_pods.go:89] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.971013  238595 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.971024  238595 system_pods.go:89] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.971037  238595 system_pods.go:89] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Running
	I0816 22:26:45.971046  238595 system_pods.go:126] duration metric: took 202.387682ms to wait for k8s-apps to be running ...
	I0816 22:26:45.971061  238595 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:26:45.971104  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:46.023089  238595 system_svc.go:56] duration metric: took 52.020591ms WaitForService to wait for kubelet.
	I0816 22:26:46.023116  238595 kubeadm.go:547] duration metric: took 6.587748491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:26:46.023141  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:46.168888  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:46.168915  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:46.168933  238595 node_conditions.go:105] duration metric: took 145.786239ms to run NodePressure ...
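The NodePressure verification above reads each node's capacity and confirms that no pressure condition (MemoryPressure, DiskPressure, PIDPressure) is set. A sketch using the k8s.io/api types, seeded with the capacity values from the log; the in-memory node is illustrative, since a real check would list nodes via the API server:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // nodeUnderPressure reports whether any pressure condition is True.
    func nodeUnderPressure(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if c.Status == corev1.ConditionTrue {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        // In-memory node seeded with the capacity values from the log.
        node := &corev1.Node{}
        node.Status.Capacity = corev1.ResourceList{
            corev1.ResourceCPU:              resource.MustParse("8"),
            corev1.ResourceEphemeralStorage: resource.MustParse("309568300Ki"),
        }
        fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
        fmt.Println("under pressure:", nodeUnderPressure(node))
    }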
	I0816 22:26:46.168945  238595 start.go:231] waiting for startup goroutines ...
	I0816 22:26:46.211558  238595 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:26:46.214728  238595 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816221939-6487" cluster and "default" namespace by default
	I0816 22:26:45.495975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:47.496653  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:49.995957  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:52.496048  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:54.204913  240293 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.057884699s)
	I0816 22:26:54.204974  240293 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:54.214048  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:54.214110  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:54.236967  240293 cri.go:76] found id: ""
	I0816 22:26:54.237019  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:54.243553  240293 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:54.243606  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:54.249971  240293 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:54.250416  240293 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
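The config check that just failed drives this branch: minikube stats the four kubeconfig files under /etc/kubernetes, and if any is missing (the "ls -la" exiting with status 2 above), it skips stale-config cleanup and proceeds straight to kubeadm init. A sketch of that decision; the helper name is assumed, and the paths match the log:

    package main

    import (
        "fmt"
        "os"
    )

    var staleConfigs = []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }

    // needsStaleCleanup reports whether all four kubeconfig files exist;
    // a missing file means there is nothing stale to clean up.
    func needsStaleCleanup() bool {
        for _, p := range staleConfigs {
            if _, err := os.Stat(p); err != nil {
                // Equivalent of "ls -la" exiting with status 2.
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println("stale config cleanup needed:", needsStaleCleanup())
    }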
	I0816 22:26:54.516364  240293 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:55.249703  240293 out.go:204]   - Booting up control plane ...
	I0816 22:26:54.996103  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:57.495660  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:59.495743  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:01.995335  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:25:48 UTC, end at Mon 2021-08-16 22:27:07 UTC. --
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.919511998Z" level=info msg="Created container 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82: kube-system/kindnet-4wtm6/kindnet-cni" id=ea2e3713-f8c8-4b5f-92d8-661a454f51cd name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.920164222Z" level=info msg="Starting container: 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82" id=33299c18-2b29-4129-8a50-c16b0ae1896a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.920490527Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-242br/POD" id=50d91f6a-327a-48af-af01-89e51f350506 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:09 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:09.931078770Z" level=info msg="Started container 13ea21137eecc1b006615bde88ce68cb748b6081ec476b11a4f1c42824c05e82: kube-system/kindnet-4wtm6/kindnet-cni" id=33299c18-2b29-4129-8a50-c16b0ae1896a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.116338754Z" level=info msg="Ran pod sandbox ff5a70388e7696343dc3c4be496d8892155d5bcdc0d096465b5316c5621bc2c3 with infra container: kube-system/kube-proxy-242br/POD" id=50d91f6a-327a-48af-af01-89e51f350506 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.117498357Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.0-rc.0" id=2985f9b5-b86b-45ee-9057-8f3411c06a2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.118268346Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,RepoTags:[k8s.gcr.io/kube-proxy:v1.22.0-rc.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8 k8s.gcr.io/kube-proxy@sha256:d7d96bcbac7bfcb2eec40f086186850c1492540b1feed855f937d68d375d7980],Size_:105449192,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2985f9b5-b86b-45ee-9057-8f3411c06a2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.119122662Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.0-rc.0" id=d758234d-e17f-47b5-a3e7-d5ff95f3c32c name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.119731699Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c,RepoTags:[k8s.gcr.io/kube-proxy:v1.22.0-rc.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:494cce2f9dbe9f6f86c5aac1a5b9e3b696500b57a06ce17a8b2aa74c955079c8 k8s.gcr.io/kube-proxy@sha256:d7d96bcbac7bfcb2eec40f086186850c1492540b1feed855f937d68d375d7980],Size_:105449192,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d758234d-e17f-47b5-a3e7-d5ff95f3c32c name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.120608805Z" level=info msg="Creating container: kube-system/kube-proxy-242br/kube-proxy" id=6200b506-a13e-4e28-9fae-7610d3265429 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.267861426Z" level=info msg="Created container 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605: kube-system/kube-proxy-242br/kube-proxy" id=6200b506-a13e-4e28-9fae-7610d3265429 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.268449638Z" level=info msg="Starting container: 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605" id=8fbdd443-6843-47b2-a02b-af9ea0be365c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.280946153Z" level=info msg="Started container 753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605: kube-system/kube-proxy-242br/kube-proxy" id=8fbdd443-6843-47b2-a02b-af9ea0be365c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.521302094Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=c42277d0-c651-4a4c-b26d-89eb1777ae49 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.730474500Z" level=info msg="Ran pod sandbox 4162e9d2bc7a18a1a87f14b0d226fdbb28c672d30492cca60bc6434c14650de5 with infra container: kube-system/storage-provisioner/POD" id=c42277d0-c651-4a4c-b26d-89eb1777ae49 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.732461992Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1aafcef-096c-44c1-b923-60bc0c443438 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.733195035Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c1aafcef-096c-44c1-b923-60bc0c443438 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.733990959Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4dcb58f3-656b-410e-9384-bfd3e24ba7c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.734612329Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4dcb58f3-656b-410e-9384-bfd3e24ba7c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.735573481Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=137cd6d3-4d81-4c2f-8dea-d27a95295cd7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.747348302Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged/etc/passwd: no such file or directory"
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.747403698Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1fb784df33c89ad75d836672da4c29d797c171f633cb1c7ab33208e134343e35/merged/etc/group: no such file or directory"
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.917509537Z" level=info msg="Created container f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15: kube-system/storage-provisioner/storage-provisioner" id=137cd6d3-4d81-4c2f-8dea-d27a95295cd7 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.918155037Z" level=info msg="Starting container: f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15" id=16bebaa4-e904-4e42-8087-0a748cce6b21 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:10 newest-cni-20210816222436-6487 crio[243]: time="2021-08-16 22:26:10.930999065Z" level=info msg="Started container f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15: kube-system/storage-provisioner/storage-provisioner" id=16bebaa4-e904-4e42-8087-0a748cce6b21 name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	f53cda0af472e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   56 seconds ago       Exited              storage-provisioner       0                   4162e9d2bc7a1
	753269be7f7c3       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c   57 seconds ago       Running             kube-proxy                1                   ff5a70388e769
	13ea21137eecc       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   57 seconds ago       Running             kindnet-cni               1                   2f59203e18012
	fe20235c8dfab       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a   About a minute ago   Running             kube-apiserver            1                   cef5c6e154414
	3240bbb3d9227       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba   About a minute ago   Running             etcd                      1                   413ba52501c76
	58841932c26d9       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c   About a minute ago   Running             kube-controller-manager   1                   a82deb4862d7a
	57e924dea5914       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75   About a minute ago   Running             kube-scheduler            1                   23128383f5cf7
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.895921] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.832077] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.335384] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:27] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[ +13.663740] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth55ef9b3c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 37 a8 8c 4d 9e 08 06        .......7..M...
	[  +2.163880] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb864e10f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 f5 50 a9 1a cc 08 06        ......&.P.....
	[  +0.707561] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth9c8775f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 1b 78 c1 d0 58 08 06        ......J.x..X..
	[  +0.000675] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6f717d76
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa 3a da 18 32 b9 08 06        .......:..2...
	[ +12.646052] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	
	* 
	* ==> etcd [3240bbb3d92273f37b404871473ddb237cebd4ed0410db9db3d7f10a4f130c9d] <==
	* {"level":"info","ts":"2021-08-16T22:26:05.022Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.0","cluster-id":"9d8fdeb88b6def78","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8688e899f7831fc7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-08-16T22:26:05.024Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:26:05.027Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20210816222436-6487 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:26:05.519Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:26:05.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-08-16T22:26:05.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:28:07 up  1:07,  0 users,  load average: 1.47, 2.33, 2.24
	Linux newest-cni-20210816222436-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [fe20235c8dfabaca60c374e04c6ca0505ac6982ff808879f0cf5404494b46e7a] <==
	* W0816 22:27:54.497508       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:54.535674       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:54.901317       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:54.951290       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:55.121563       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:55.492106       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:57.609203       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:27:59.282908       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	E0816 22:28:01.693277       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:28:01.693352       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:28:01.694918       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:28:01.695945       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:28:01.697668       1 trace.go:205] Trace[1199722416]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:44d6e16b-ec19-4485-9d85-15dee745c43f,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:27:01.692) (total time: 60005ms):
	Trace[1199722416]: [1m0.005006723s] [1m0.005006723s] END
	E0816 22:28:01.697744       1 timeout.go:135] post-timeout activity - time-elapsed: 4.324299ms, GET "/api/v1/namespaces/default" result: <nil>
	W0816 22:28:04.125929       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:28:07.462711       1 trace.go:205] Trace[6610159]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (16-Aug-2021 22:27:07.462) (total time: 59999ms):
	Trace[6610159]: [59.999889281s] [59.999889281s] END
	E0816 22:28:07.462742       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:28:07.462804       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:28:07.463924       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:28:07.465050       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:28:07.466211       1 trace.go:205] Trace[11672189]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:d96e1c9e-d7a8-45f4-9e5e-b4506af6e61d,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:27:07.462) (total time: 60003ms):
	Trace[11672189]: [1m0.003405992s] [1m0.003405992s] END
	E0816 22:28:07.470699       1 timeout.go:135] post-timeout activity - time-elapsed: 7.861564ms, GET "/api/v1/nodes" result: <nil>
	
	* 
	* ==> kube-controller-manager [58841932c26d9bee71f69ef725fd6e6bebfca6f808151590730af393a22c4366] <==
	* I0816 22:26:12.144608       1 controllermanager.go:577] Started "serviceaccount"
	I0816 22:26:12.144734       1 serviceaccounts_controller.go:117] Starting service account controller
	I0816 22:26:12.144744       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0816 22:26:12.148311       1 controllermanager.go:577] Started "daemonset"
	I0816 22:26:12.148401       1 daemon_controller.go:284] Starting daemon sets controller
	I0816 22:26:12.148457       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0816 22:26:12.150576       1 controllermanager.go:577] Started "replicaset"
	I0816 22:26:12.150693       1 replica_set.go:186] Starting replicaset controller
	I0816 22:26:12.150712       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0816 22:26:12.152360       1 controllermanager.go:577] Started "csrapproving"
	I0816 22:26:12.152478       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0816 22:26:12.152493       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0816 22:26:12.154253       1 controllermanager.go:577] Started "statefulset"
	I0816 22:26:12.154380       1 stateful_set.go:148] Starting stateful set controller
	I0816 22:26:12.154406       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0816 22:26:12.156021       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0816 22:26:12.156115       1 pv_controller_base.go:308] Starting persistent volume controller
	I0816 22:26:12.156133       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0816 22:26:12.159038       1 controllermanager.go:577] Started "disruption"
	I0816 22:26:12.159152       1 disruption.go:363] Starting disruption controller
	I0816 22:26:12.159169       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0816 22:26:12.160695       1 node_ipam_controller.go:91] Sending events to api server.
	I0816 22:26:12.221698       1 shared_informer.go:247] Caches are synced for tokens 
	W0816 22:26:52.235854       1 client_builder_dynamic.go:197] get or create service account failed: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	W0816 22:27:52.737581       1 client_builder_dynamic.go:197] get or create service account failed: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts node-controller)
	
	* 
	* ==> kube-proxy [753269be7f7c38f2867ce6b87e47c113544230521c2c9398d83b19782ced1605] <==
	* I0816 22:26:10.355577       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0816 22:26:10.355649       1 server_others.go:140] Detected node IP 192.168.67.2
	W0816 22:26:10.355665       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0816 22:26:10.381353       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:26:10.381411       1 server_others.go:212] Using iptables Proxier.
	I0816 22:26:10.381426       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:26:10.381446       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:26:10.381755       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:26:10.382328       1 config.go:315] Starting service config controller
	I0816 22:26:10.382361       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:26:10.382404       1 config.go:224] Starting endpoint slice config controller
	I0816 22:26:10.382426       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0816 22:26:10.412645       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210816222436-6487.169be9d0237b2576", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed76096c98cb5, ext:100367828, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210816222436-6487", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210816222436-6487", UID:"newest-cni-20210816222436-6487", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210816222436-6487.169be9d0237b2576" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:26:10.483401       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:26:10.484094       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [57e924dea59145634016260af8f63a36e510063a313daaa5f72b235d19db5bbc] <==
	* W0816 22:26:04.778787       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0816 22:26:05.520406       1 serving.go:347] Generated self-signed cert in-memory
	I0816 22:26:08.540205       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0816 22:26:08.540236       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 22:26:08.540249       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0816 22:26:08.540253       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:08.540273       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0816 22:26:08.540302       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0816 22:26:08.540625       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0816 22:26:08.540696       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0816 22:26:08.640851       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0816 22:26:08.640864       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0816 22:26:08.641008       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:25:48 UTC, end at Mon 2021-08-16 22:28:07 UTC. --
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813522     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a71ed147-1a32-4360-9bcc-722db25ff42e-tmp\") pod \"storage-provisioner\" (UID: \"a71ed147-1a32-4360-9bcc-722db25ff42e\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813548     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b98c23d-9fa2-44dd-b9af-b1bf3215cd88-tmp-dir\") pod \"metrics-server-7c784ccb57-j52xp\" (UID: \"8b98c23d-9fa2-44dd-b9af-b1bf3215cd88\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813589     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt2l\" (UniqueName: \"kubernetes.io/projected/8b98c23d-9fa2-44dd-b9af-b1bf3215cd88-kube-api-access-mzt2l\") pod \"metrics-server-7c784ccb57-j52xp\" (UID: \"8b98c23d-9fa2-44dd-b9af-b1bf3215cd88\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813616     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f-xtables-lock\") pod \"kube-proxy-242br\" (UID: \"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813642     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg56l\" (UniqueName: \"kubernetes.io/projected/91a06e4b-7a8f-4f7c-a698-3f40c4024f1f-kube-api-access-sg56l\") pod \"kube-proxy-242br\" (UID: \"91a06e4b-7a8f-4f7c-a698-3f40c4024f1f\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813672     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f784c344-70ae-41f8-b749-4bd3d26179d1-xtables-lock\") pod \"kindnet-4wtm6\" (UID: \"f784c344-70ae-41f8-b749-4bd3d26179d1\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813698     810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f784c344-70ae-41f8-b749-4bd3d26179d1-lib-modules\") pod \"kindnet-4wtm6\" (UID: \"f784c344-70ae-41f8-b749-4bd3d26179d1\") "
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:08.813712     810 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:26:08 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:08.835660     810 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.186909     810 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzbk2\" (UniqueName: \"kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2\") pod \"6fe4486f-609a-4711-8984-d211fafbc14a\" (UID: \"6fe4486f-609a-4711-8984-d211fafbc14a\") "
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.186959     810 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume\") pod \"6fe4486f-609a-4711-8984-d211fafbc14a\" (UID: \"6fe4486f-609a-4711-8984-d211fafbc14a\") "
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: W0816 22:26:09.187532     810 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes/kubernetes.io~projected/kube-api-access-qzbk2: clearQuota called, but quotas disabled
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.187580     810 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2" (OuterVolumeSpecName: "kube-api-access-qzbk2") pod "6fe4486f-609a-4711-8984-d211fafbc14a" (UID: "6fe4486f-609a-4711-8984-d211fafbc14a"). InnerVolumeSpecName "kube-api-access-qzbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: W0816 22:26:09.187718     810 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.187831     810 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fe4486f-609a-4711-8984-d211fafbc14a" (UID: "6fe4486f-609a-4711-8984-d211fafbc14a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.288220     810 reconciler.go:319] "Volume detached for volume \"kube-api-access-qzbk2\" (UniqueName: \"kubernetes.io/projected/6fe4486f-609a-4711-8984-d211fafbc14a-kube-api-access-qzbk2\") on node \"newest-cni-20210816222436-6487\" DevicePath \"\""
	Aug 16 22:26:09 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:09.288262     810 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe4486f-609a-4711-8984-d211fafbc14a-config-volume\") on node \"newest-cni-20210816222436-6487\" DevicePath \"\""
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:10.282747     810 request.go:665] Waited for 1.070351149s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:10.842405     810 pod_workers.go:747] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/metrics-server-7c784ccb57-j52xp" podUID=8b98c23d-9fa2-44dd-b9af-b1bf3215cd88
	Aug 16 22:26:10 newest-cni-20210816222436-6487 kubelet[810]: E0816 22:26:10.842508     810 pod_workers.go:747] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-78fcd69978-sh8hf" podUID=99ca4da4-63c0-4eb5-b1a9-824580994bf0
	Aug 16 22:26:11 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:11.841109     810 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6fe4486f-609a-4711-8984-d211fafbc14a path="/var/lib/kubelet/pods/6fe4486f-609a-4711-8984-d211fafbc14a/volumes"
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:26:12 newest-cni-20210816222436-6487 kubelet[810]: I0816 22:26:12.753868     810 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:12 newest-cni-20210816222436-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f53cda0af472e0179ac68516d7550b20c474108c95d22e22223b7dfa9eda7a15] <==
	* 
	goroutine 89 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032c310, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032c300)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000374480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00043af00, 0x18e5530, 0xc00004a100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00052e0e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052e0e0, 0x18b3d60, 0xc0004bd4a0, 0x1, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00052e0e0, 0x3b9aca00, 0x0, 0x1, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00052e0e0, 0x3b9aca00, 0xc00019a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:28:07.466069  273056 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (115.32s)
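The failure above reduces to `kubectl describe nodes` hanging for a full 60s while the apiserver retried its etcd connection (the repeated `dial tcp 127.0.0.1:2379: i/o timeout` lines). As a minimal sketch of the deadline-bounded command pattern behind the `(dbg) Run:` lines (standard library only; the command and the 30-second deadline are illustrative, not the harness's actual values):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Illustrative 30s deadline; the harness's own timeouts differ.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// exec.CommandContext kills the process once the deadline expires,
		// so a wedged apiserver cannot stall log collection indefinitely.
		out, err := exec.CommandContext(ctx, "kubectl", "describe", "nodes").CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("describe nodes timed out (apiserver likely still waiting on etcd)")
			return
		}
		if err != nil {
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}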

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (109.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816221939-6487 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816221939-6487 --alsologtostderr -v=1: exit status 80 (1.919393733s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-different-port-20210816221939-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:26:57.070983  272036 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:26:57.071084  272036 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:57.071093  272036 out.go:311] Setting ErrFile to fd 2...
	I0816 22:26:57.071096  272036 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:57.071211  272036 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:26:57.071353  272036 out.go:305] Setting JSON to false
	I0816 22:26:57.071372  272036 mustload.go:65] Loading cluster: default-k8s-different-port-20210816221939-6487
	I0816 22:26:57.071664  272036 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:26:57.072074  272036 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:57.110495  272036 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:57.111273  272036 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210816221939-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:26:57.113902  272036 out.go:177] * Pausing node default-k8s-different-port-20210816221939-6487 ... 
	I0816 22:26:57.113936  272036 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:57.114163  272036 ssh_runner.go:149] Run: systemctl --version
	I0816 22:26:57.114204  272036 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:57.153435  272036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:57.247750  272036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:57.256745  272036 pause.go:50] kubelet running: true
	I0816 22:26:57.256790  272036 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:26:57.402617  272036 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:26:57.402690  272036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:26:57.473700  272036 cri.go:76] found id: "cc250616450056102bb06e4b2a7a294752cbf8992bdf38ffd039f70b9bf5d938"
	I0816 22:26:57.473724  272036 cri.go:76] found id: "9913deebfe52a0cf2858d139bbde3a6115b0f3e565c62fc3705dbf7a8fe23971"
	I0816 22:26:57.473729  272036 cri.go:76] found id: "9b088643e34705cdcaf3bbd07da7d00a273437fec552d23dbd96fda614d5a6f3"
	I0816 22:26:57.473733  272036 cri.go:76] found id: "4014814f8de6c11880e36f0bb888b4734c3a928cbb02aaf16da299c541e2a01d"
	I0816 22:26:57.473737  272036 cri.go:76] found id: "424818e3cd13674e399c36ecfdfa799fadb4897a5ee828f9351784d6deaf5547"
	I0816 22:26:57.473741  272036 cri.go:76] found id: "8fda9c602da541a3efd623062eb9b12546ce7f7dbe30779b8dcc048fafb8e49d"
	I0816 22:26:57.473745  272036 cri.go:76] found id: "5f1c7a968a7136b41987a6c94cc5f792b8425e5a3aacaefae69394da84ed0a4c"
	I0816 22:26:57.473750  272036 cri.go:76] found id: "ed8f2fd04b8028dc3b19dd83f0b06d817cfad8c6bb15b23ffd738c0796981129"
	I0816 22:26:57.473753  272036 cri.go:76] found id: "8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	I0816 22:26:57.473760  272036 cri.go:76] found id: "79f989d1f87dfbbfdcfeb997afdb759885178f62c07cdaf29baf001761967c6d"
	I0816 22:26:57.473763  272036 cri.go:76] found id: ""
	I0816 22:26:57.473798  272036 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816221939-6487 --alsologtostderr -v=1 failed: exit status 80
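For reference, a hypothetical helper in the style of these tests (`runOrFail` is illustrative, not a name from helpers_test.go) showing how a non-zero exit such as the status 80 above gets surfaced together with the captured output:

	package integration

	import (
		"os/exec"
		"testing"
	)

	// runOrFail runs a command and fails the test with the numeric exit
	// status and the combined stdout/stderr when it exits non-zero.
	func runOrFail(t *testing.T, name string, args ...string) []byte {
		t.Helper()
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// e.g. "exit status 80" from the pause command above.
				t.Fatalf("%s %v failed: exit status %d\n%s", name, args, ee.ExitCode(), out)
			}
			t.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return out
	}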
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210816221939-6487
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210816221939-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2",
	        "Created": "2021-08-16T22:19:40.83769465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:21:04.019065668Z",
	            "FinishedAt": "2021-08-16T22:21:01.776663505Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/hosts",
	        "LogPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2-json.log",
	        "Name": "/default-k8s-different-port-20210816221939-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210816221939-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210816221939-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210816221939-6487",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210816221939-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210816221939-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210816221939-6487",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210816221939-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c01efb1697beacb1b10018f6698e40f168542a6fce479e8f3b57d471dcbb711",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32950"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c01efb1697b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210816221939-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "05d52cd72bdc"
	                    ],
	                    "NetworkID": "6a14296a15132239c81b36a2db1275d8af1a3ea741aaec5494005057cf547d13",
	                    "EndpointID": "70c88936941dd9b3d9600a590a64a27b951033e05cca2ed4a34f93adca901bfd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
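
The inspect output above shows every container port published only on loopback with an ephemeral host port (22/tcp -> 127.0.0.1:32954, the port the test's SSH steps depend on). As a minimal Go sketch, assuming only a local Docker CLI and the container name from this report (an editorial illustration, not minikube's own code), that mapping can be read back out of the same JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect mirrors just the fragment of `docker inspect` output used here.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Container name copied from the report above.
		out, err := exec.Command("docker", "container", "inspect",
			"default-k8s-different-port-20210816221939-6487").Output()
		if err != nil {
			panic(err)
		}
		var parsed []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &parsed); err != nil {
			panic(err)
		}
		for _, b := range parsed[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32954 above
		}
	}
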
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487: exit status 2 (17.313005045s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:27:16.295328  272362 status.go:422] Error apiserver status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
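
Of the checks above, only `[-]etcd failed: reason withheld` is failing; every other poststarthook reports ok, so the 500 is driven entirely by etcd being unavailable during the pause. A minimal sketch of the same probe, assuming /healthz remains anonymously readable on this cluster and (for local debugging only) skipping certificate verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Address and port copied from the report; InsecureSkipVerify is for
		// local debugging only, since the cluster CA is not loaded here.
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := c.Get("https://192.168.49.2:8444/healthz?verbose")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // 500 while the etcd check fails
		fmt.Print(string(body))  // the same per-check [+]/[-] lines as above
	}
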
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 logs -n 25: exit status 110 (13.68405483s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                                         | minikube                                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:35 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p pause-20210816221349-6487                               | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
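
Per the Audit table, the second `start` of the default-k8s-different-port profile spans 22:21:02 to 22:26:46 UTC, i.e. 5m44s of the restart window before the failed Pause. A quick check of that arithmetic (timestamps copied from the table's Start Time / End Time columns):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		layout := "Mon, 02 Jan 2006 15:04:05 MST"
		begin, _ := time.Parse(layout, "Mon, 16 Aug 2021 22:21:02 UTC")
		end, _ := time.Parse(layout, "Mon, 16 Aug 2021 22:26:46 UTC")
		fmt.Println(end.Sub(begin)) // 5m44s
	}
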
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
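The three commands above are minikube's netfilter fallback: the sysctl probe exits 255 because the br_netfilter module is not loaded yet, so the code loads the module and then enables IPv4 forwarding. A self-contained sketch of that check-then-fallback sequence follows (the sudo invocations mirror the log; this is an illustration, not minikube's actual crio.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl key
// is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The key is absent until the module is loaded; this is the expected path.
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward, run through sh for the redirect.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter ready")
}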
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
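The bash one-liner above edits /etc/hosts in two steps: grep -v drops any stale line ending in a tab plus host.minikube.internal, then the fresh mapping is appended and the temp file is copied back over /etc/hosts. A minimal Go sketch of the same filter-and-append idea (IP and hostname copied from the log; not the actual minikube implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any line whose last field is name, then appends
// "ip\tname", mirroring the grep -v / echo pair in the log.
func updateHosts(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	body := strings.TrimRight(string(data), "\n")
	fmt.Print(updateHosts(body, "192.168.67.1", "host.minikube.internal"))
}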
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
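The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written as one multi-document file and scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One quick way to sanity-check such a file is to decode each document in turn; a sketch assuming the gopkg.in/yaml.v3 package is available (file name illustrative):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // path is illustrative
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// yaml.v3's Decoder walks "---"-separated documents one at a time.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}

Run against the config above, this prints four apiVersion/kind pairs, which is a cheap guard against a truncated or mis-ordered dump.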
	
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
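For context on the symlink names used above: 3ec20f2e.0, b5213941.0, and 51391683.0 are OpenSSL subject hashes. Running "openssl x509 -hash -noout -in <cert>" prints the hash of the certificate's subject, and linking the certificate as <hash>.0 inside /etc/ssl/certs is what lets OpenSSL's hashed-directory lookup find it during chain verification. The "test -L ... || ln -fs" guard simply makes the link creation idempotent across repeated starts.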
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
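The run of pgrep calls above is minikube's process-wait loop: poll for the apiserver binary roughly every 500ms until pgrep exits 0 or the time budget runs out. A self-contained sketch of the pattern (the pgrep pattern is taken from the log; the timeout is illustrative):

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it exits 0 or ctx ends.
func waitForProcess(ctx context.Context, pattern string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run() == nil {
			return nil // exit status 0: the process exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver process is up")
}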
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
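The healthz phase above is a plain HTTP retry loop: 500 responses are expected while the apiserver's post-start hooks (rbac/bootstrap-roles and friends) finish, and the wait only ends once /healthz returns 200 with body "ok". A minimal sketch, assuming the endpoint from the log and skipping TLS verification because the serving cert is signed by minikube's private CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cert chains to a private CA, so a stock client can't verify it;
		// skip verification for this local liveness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expect "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}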
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
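Reading /proc/<pid>/oom_adj above verifies that the restarted apiserver kept its OOM protection: -16 tells the kernel's OOM killer to strongly prefer other victims. oom_adj is the legacy knob (newer kernels expose oom_score_adj alongside it). A short sketch of the same check (pgrep -n picks the newest match; this is a hypothetical standalone tool, not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest matching pid, mirroring $(pgrep kube-apiserver) in the log.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}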
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
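Each sshutil line above opens one SSH session into the node container using the per-machine key minikube generated. The equivalent manual connection, with all parameters taken from the log, would be:

	ssh -p 32969 -i /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa docker@127.0.0.1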
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
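The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log. It can be reproduced by hand; the -k flag (an assumption here, not part of the test) skips verification of the cluster's self-signed certificate:

	curl -k https://192.168.67.2:8443/healthz
	# ok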
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
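The skew warning a few lines up follows kubectl's support policy: a client is only supported within one minor version of the server, and 1.20 against 1.22 is two minors apart. The suggested remedy downloads a matching client:

	minikube kubectl -- get pods -A
	# fetches kubectl v1.22.0-rc.0 on first use, then proxies the arguments to it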
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
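The three steps above are how the CNI gets wired up: stat confirms the stock CNI plugin binaries are present under /opt/cni/bin, the kindnet manifest is rendered in memory and copied to /var/tmp/minikube/cni.yaml over SSH, and the pinned kubectl applies it. A quick way to confirm the result (assuming kindnet's usual app=kindnet pod label) is:

	sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -l app=kindnet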
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
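An oom_adj of -16 means the kernel's OOM killer is strongly biased away from the apiserver process, so it outlives ordinary workloads under memory pressure. The probe is the one-liner from the log and works for any process:

	cat /proc/$(pgrep kube-apiserver)/oom_adj
	# -16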
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.798667  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.298823  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.798898  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.299125  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.798939  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.298461  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.799163  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.298377  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.798518  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:32.299080  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.495517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:31.495703  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:33.496362  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:32.798224  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.298433  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.799075  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.298503  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.798223  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.299182  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.798578  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.298228  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.798801  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:37.299144  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.996187  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:38.495700  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:37.798260  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.298197  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.798424  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.917845  238595 kubeadm.go:985] duration metric: took 13.296684424s to wait for elevateKubeSystemPrivileges.
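The burst of `kubectl get sa default` calls above is minikube polling, at roughly 500ms intervals, until the controller-manager has created the default service account, a sign that the cluster's core controllers are up. A hand-rolled equivalent of the wait (a sketch, using the same pinned kubectl) would be:

	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done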
	I0816 22:26:38.917877  238595 kubeadm.go:392] StartCluster complete in 5m29.078278154s
	I0816 22:26:38.917895  238595 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:38.917976  238595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:38.919347  238595 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:39.435280  238595 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816221939-6487" rescaled to 1
	I0816 22:26:39.435337  238595 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:26:39.436884  238595 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:39.435381  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:39.436944  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:39.435407  238595 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:39.437054  238595 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437066  238595 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437084  238595 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437097  238595 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437107  238595 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.437111  238595 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:39.437119  238595 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.435601  238595 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:26:39.437127  238595 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:39.437075  238595 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437147  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437156  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	W0816 22:26:39.437157  238595 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:39.437098  238595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437219  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437580  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437673  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437680  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437786  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.450925  238595 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454454  238595 node_ready.go:49] node "default-k8s-different-port-20210816221939-6487" has status "Ready":"True"
	I0816 22:26:39.454478  238595 node_ready.go:38] duration metric: took 3.504801ms waiting for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454492  238595 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0816 22:26:39.461585  238595 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:39.496014  238595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:39.496143  238595 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.496159  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:39.497741  238595 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.496222  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.497808  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:39.497821  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:39.497865  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.499561  238595 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.499598  238595 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:39.499623  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.500057  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.508968  238595 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:39.510786  238595 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.510877  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:39.510894  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:39.510963  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.543137  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
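The pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway from inside the cluster. After the replace, the relevant Corefile fragment reads:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}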
	I0816 22:26:39.551327  238595 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.551354  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:39.551418  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.562469  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.567015  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.585895  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.601932  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.730192  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:39.730216  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:39.735004  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:39.735028  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:39.825712  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:39.825735  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:39.828025  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:39.828046  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:39.829939  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.830581  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.917562  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.917594  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:39.918416  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:39.918442  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:39.934239  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.935303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:39.935323  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:40.024142  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:40.024168  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:40.121870  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:40.121954  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:40.213303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:40.213329  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:40.226600  238595 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0816 22:26:40.233649  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:40.233674  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:40.315993  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.316021  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:40.329860  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.913110  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08249574s)
	I0816 22:26:41.119373  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.185088873s)
	I0816 22:26:41.119413  238595 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:41.513353  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.183438758s)
	I0816 22:26:41.515520  238595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:26:41.515560  238595 addons.go:344] enableAddons completed in 2.080164328s
	I0816 22:26:41.516293  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:40.996044  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:42.996463  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:43.970224  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:45.016130  238595 pod_ready.go:92] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.016153  238595 pod_ready.go:81] duration metric: took 5.554536838s waiting for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.016169  238595 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020503  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.020523  238595 pod_ready.go:81] duration metric: took 4.344641ms waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020537  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024738  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.024753  238595 pod_ready.go:81] duration metric: took 4.208942ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024762  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028646  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.028661  238595 pod_ready.go:81] duration metric: took 3.89128ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028670  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032791  238595 pod_ready.go:92] pod "kube-proxy-4pmgn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.032812  238595 pod_ready.go:81] duration metric: took 4.13529ms waiting for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032823  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369533  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.369559  238595 pod_ready.go:81] duration metric: took 336.726404ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369571  238595 pod_ready.go:38] duration metric: took 5.915063438s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:45.369595  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:45.369645  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:45.395595  238595 api_server.go:70] duration metric: took 5.960222514s to wait for apiserver process to appear ...
	I0816 22:26:45.395625  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:45.395637  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:26:45.400217  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:26:45.401067  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:26:45.401089  238595 api_server.go:129] duration metric: took 5.457124ms to wait for apiserver health ...
	I0816 22:26:45.401099  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:45.570973  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:26:45.571001  238595 system_pods.go:61] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.571006  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.571016  238595 system_pods.go:61] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.571020  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.571025  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.571028  238595 system_pods.go:61] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.571032  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.571039  238595 system_pods.go:61] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.571069  238595 system_pods.go:61] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:45.571074  238595 system_pods.go:74] duration metric: took 169.970426ms to wait for pod list to return data ...
	I0816 22:26:45.571085  238595 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:45.768620  238595 default_sa.go:45] found service account: "default"
	I0816 22:26:45.768644  238595 default_sa.go:55] duration metric: took 197.553773ms for default service account to be created ...
	I0816 22:26:45.768653  238595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:26:45.970940  238595 system_pods.go:86] 9 kube-system pods found
	I0816 22:26:45.970973  238595 system_pods.go:89] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.970982  238595 system_pods.go:89] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.970987  238595 system_pods.go:89] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.970993  238595 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.971000  238595 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.971006  238595 system_pods.go:89] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.971013  238595 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.971024  238595 system_pods.go:89] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.971037  238595 system_pods.go:89] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Running
	I0816 22:26:45.971046  238595 system_pods.go:126] duration metric: took 202.387682ms to wait for k8s-apps to be running ...
	I0816 22:26:45.971061  238595 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:26:45.971104  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:46.023089  238595 system_svc.go:56] duration metric: took 52.020591ms WaitForService to wait for kubelet.
	I0816 22:26:46.023116  238595 kubeadm.go:547] duration metric: took 6.587748491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:26:46.023141  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:46.168888  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:46.168915  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:46.168933  238595 node_conditions.go:105] duration metric: took 145.786239ms to run NodePressure ...
	I0816 22:26:46.168945  238595 start.go:231] waiting for startup goroutines ...
	I0816 22:26:46.211558  238595 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:26:46.214728  238595 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816221939-6487" cluster and "default" namespace by default
	I0816 22:26:45.495975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:47.496653  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:49.995957  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:52.496048  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:54.204913  240293 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.057884699s)
	I0816 22:26:54.204974  240293 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:54.214048  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:54.214110  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:54.236967  240293 cri.go:76] found id: ""
	I0816 22:26:54.237019  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:54.243553  240293 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:54.243606  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:54.249971  240293 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:54.250416  240293 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
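With /etc/kubernetes already emptied by the kubeadm reset above, the stale-config cleanup is skipped and a clean init runs. Every name in --ignore-preflight-errors matches a preflight check that either cannot pass inside a Docker-driver node (Port-10250, Swap, Mem, SystemVerification) or would object to directories a previous cluster may have left behind (the DirAvailable and FileAvailable checks). A trimmed sketch of the same call:

	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,SystemVerification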
	I0816 22:26:54.516364  240293 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:55.249703  240293 out.go:204]   - Booting up control plane ...
	I0816 22:26:54.996103  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:57.495660  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:59.495743  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:01.995335  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:03.995379  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:05.995637  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:08.496092  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:09.298595  240293 out.go:204]   - Configuring RBAC rules ...
	I0816 22:27:09.713304  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:27:09.713327  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:27:09.715227  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:27:09.715277  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:27:09.718863  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:27:09.718885  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:27:09.731677  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:27:09.962283  240293 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:27:09.962350  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:09.962373  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816221913-6487 minikube.k8s.io/updated_at=2021_08_16T22_27_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.060642  240293 ops.go:34] apiserver oom_adj: -16
	I0816 22:27:10.060723  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.995882  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:13.495974  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:10.633246  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.133139  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.633557  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.133518  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.633029  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.132949  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.632656  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.133534  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.632964  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.133130  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:21:04 UTC, end at Mon 2021-08-16 22:27:16 UTC. --
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.769060255Z" level=info msg="Image k8s.gcr.io/echoserver:1.4 not found" id=1fe52a26-1204-4d66-b5e1-f86108586e52 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.769470375Z" level=info msg="Pulling image: k8s.gcr.io/echoserver:1.4" id=75b240ca-01e9-46c7-b9fa-c328be0906b7 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.771719747Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.971348298Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.829806594Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=75b240ca-01e9-46c7-b9fa-c328be0906b7 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.830645861Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=1050cdf7-3bdd-405e-9429-2f7f716bd3d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.832028399Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1050cdf7-3bdd-405e-9429-2f7f716bd3d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.832794459Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=fc97aa62-5e94-4aa7-b1cb-456d16eac579 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.997514599Z" level=info msg="Created container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=fc97aa62-5e94-4aa7-b1cb-456d16eac579 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.998036077Z" level=info msg="Starting container: afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a" id=b61fa1f3-5a93-4f75-911d-40df0a61ef2d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.021526225Z" level=info msg="Started container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=b61fa1f3-5a93-4f75-911d-40df0a61ef2d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.549395063Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=ea8d175d-831e-4d16-9099-2fcda88e5b1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.551268934Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ea8d175d-831e-4d16-9099-2fcda88e5b1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.551865308Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=bebb2ae6-c93f-4d46-9987-5157ce55f6f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.553362492Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bebb2ae6-c93f-4d46-9987-5157ce55f6f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.554088163Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=13d383a2-756f-46d4-acf2-b4cbccf52a39 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.703445367Z" level=info msg="Created container 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=13d383a2-756f-46d4-acf2-b4cbccf52a39 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.703981461Z" level=info msg="Starting container: 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f" id=42cb85d5-f3e0-4da6-845b-b75473d68959 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.728797601Z" level=info msg="Started container 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=42cb85d5-f3e0-4da6-845b-b75473d68959 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:49.552835458Z" level=info msg="Removing container: afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a" id=5545679f-29e2-4bba-94a8-6485ead42087 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:49.588498443Z" level=info msg="Removed container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=5545679f-29e2-4bba-94a8-6485ead42087 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.505570038Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=75b55660-3915-443e-abfa-132b2d86d7eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.505884677Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=75b55660-3915-443e-abfa-132b2d86d7eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.506439374Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=af9c8952-fa83-478f-936b-c1c62b2d5de8 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.517004589Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	8ab6c5882403d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   28 seconds ago      Exited              dashboard-metrics-scraper   1                   5c3713ecb972b
	79f989d1f87df       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   34 seconds ago      Running             kubernetes-dashboard        0                   4d3db69d8b8ff
	cc25061645005       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   34 seconds ago      Exited              storage-provisioner         0                   c9fffbe595965
	9913deebfe52a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   35 seconds ago      Running             coredns                     0                   c72232d083282
	9b088643e3470       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   37 seconds ago      Running             kindnet-cni                 0                   9ed325eb35d3f
	4014814f8de6c       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   37 seconds ago      Running             kube-proxy                  0                   3bf6028b9f9f4
	424818e3cd136       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   59 seconds ago      Running             etcd                        0                   a56e9bdba8779
	8fda9c602da54       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   59 seconds ago      Running             kube-controller-manager     0                   07858669fc1c0
	5f1c7a968a713       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   59 seconds ago      Running             kube-scheduler              0                   5f7ab1a5d2a8b
	ed8f2fd04b802       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   59 seconds ago      Running             kube-apiserver              0                   69b476c5677b4
	
	* 
	* ==> coredns [9913deebfe52a0cf2858d139bbde3a6115b0f3e565c62fc3705dbf7a8fe23971] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.463957] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.895921] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.832077] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.335384] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:27] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[ +13.663740] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth55ef9b3c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 37 a8 8c 4d 9e 08 06        .......7..M...
	[  +2.163880] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb864e10f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 f5 50 a9 1a cc 08 06        ......&.P.....
	[  +0.707561] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth9c8775f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 1b 78 c1 d0 58 08 06        ......J.x..X..
	[  +0.000675] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6f717d76
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa 3a da 18 32 b9 08 06        .......:..2...
	
	* 
	* ==> etcd [424818e3cd13674e399c36ecfdfa799fadb4897a5ee828f9351784d6deaf5547] <==
	* raft2021/08/16 22:26:17 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:26:17.830341 W | auth: simple token is not cryptographically signed
	2021-08-16 22:26:17.835242 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:26:17.835370 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:26:17.835811 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-16 22:26:17.837782 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:26:17.837892 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-16 22:26:17.837943 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/16 22:26:17 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-16 22:26:17.928543 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:26:17.929278 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:26:17.929338 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:26:17.929373 I | embed: ready to serve client requests
	2021-08-16 22:26:17.929478 I | embed: ready to serve client requests
	2021-08-16 22:26:17.930592 I | etcdserver: published {Name:default-k8s-different-port-20210816221939-6487 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-16 22:26:17.932644 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:26:17.933111 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-16 22:26:36.227188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:26:44.854469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:26:54.854574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:27:29 up  1:06,  0 users,  load average: 2.38, 2.58, 2.31
	Linux default-k8s-different-port-20210816221939-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ed8f2fd04b8028dc3b19dd83f0b06d817cfad8c6bb15b23ffd738c0796981129] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:26:43.340347       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:26:55.611668       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:26:55.611709       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:26:55.611717       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0816 22:27:08.282436       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:27:08.282529       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:27:08.283730       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:27:08.284884       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:27:08.286036       1 trace.go:205] Trace[926519438]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:26:58.286) (total time: 9999ms):
	Trace[926519438]: [9.999204577s] [9.999204577s] END
	I0816 22:27:21.046690       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:27:21.135201       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:27:28.440193       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:27:29.229222       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:27:29.719534       1 trace.go:205] Trace[1377374562]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:27:16.910) (total time: 12809ms):
	Trace[1377374562]: [12.809370082s] [12.809370082s] END
	I0816 22:27:29.719588       1 trace.go:205] Trace[146602418]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:26:59.730) (total time: 29989ms):
	Trace[146602418]: [29.989034377s] [29.989034377s] END
	E0816 22:27:29.719597       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	E0816 22:27:29.719625       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:27:29.719834       1 trace.go:205] Trace[530678070]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:27:16.910) (total time: 12809ms):
	Trace[530678070]: [12.809706514s] [12.809706514s] END
	I0816 22:27:29.721033       1 trace.go:205] Trace[1580638936]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:26:59.730) (total time: 29990ms):
	Trace[1580638936]: [29.990482228s] [29.990482228s] END
	
	* 
	* ==> kube-controller-manager [8fda9c602da541a3efd623062eb9b12546ce7f7dbe30779b8dcc048fafb8e49d] <==
	* I0816 22:26:38.946281       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-mnqvq"
	I0816 22:26:39.192005       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:26:39.192028       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:26:39.248799       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:26:40.629417       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0816 22:26:40.717505       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:26:40.825903       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:26:40.917674       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-lfkmq"
	I0816 22:26:41.132391       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:26:41.137636       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:26:41.144286       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:26:41.214906       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.219957       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.220034       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.220235       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.225299       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:26:41.225482       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.225549       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.229857       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.229875       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:26:41.316688       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-jmcw9"
	I0816 22:26:41.316805       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-swsbn"
	I0816 22:26:43.693464       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:27:08.842687       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:09.266493       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4014814f8de6c11880e36f0bb888b4734c3a928cbb02aaf16da299c541e2a01d] <==
	* I0816 22:26:39.398397       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:26:39.398449       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:26:39.398479       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:26:39.425786       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:26:39.425822       1 server_others.go:212] Using iptables Proxier.
	I0816 22:26:39.425835       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:26:39.425844       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:26:39.426182       1 server.go:643] Version: v1.21.3
	I0816 22:26:39.426941       1 config.go:315] Starting service config controller
	I0816 22:26:39.427020       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:26:39.427052       1 config.go:224] Starting endpoint slice config controller
	I0816 22:26:39.427070       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:26:39.430581       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:26:39.431603       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:26:39.528071       1 shared_informer.go:247] Caches are synced for service config 
	I0816 22:26:39.528209       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [5f1c7a968a7136b41987a6c94cc5f792b8425e5a3aacaefae69394da84ed0a4c] <==
	* I0816 22:26:22.115874       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:22.115950       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:22.120066       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:26:22.120156       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:26:22.132709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:26:22.132945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:26:22.133062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:26:22.133275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:26:22.133389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:26:22.133492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:26:22.133485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:26:22.133719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:26:22.135285       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:26:22.139706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:26:22.139865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.043228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.213299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:26:23.263893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:26:23.268967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.313677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:26:23.313779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0816 22:26:23.616159       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:21:04 UTC, end at Mon 2021-08-16 22:27:29 UTC. --
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338564    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a7f8825-8975-4257-a108-92592ad8f017-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-jmcw9\" (UID: \"0a7f8825-8975-4257-a108-92592ad8f017\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338647    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97t8q\" (UniqueName: \"kubernetes.io/projected/0a7f8825-8975-4257-a108-92592ad8f017-kube-api-access-97t8q\") pod \"kubernetes-dashboard-6fcdf4f6d-jmcw9\" (UID: \"0a7f8825-8975-4257-a108-92592ad8f017\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338778    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a774169-234c-44ee-b3f8-5b8727ef0b8d-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-swsbn\" (UID: \"4a774169-234c-44ee-b3f8-5b8727ef0b8d\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338841    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcglf\" (UniqueName: \"kubernetes.io/projected/4a774169-234c-44ee-b3f8-5b8727ef0b8d-kube-api-access-wcglf\") pod \"dashboard-metrics-scraper-8685c45546-swsbn\" (UID: \"4a774169-234c-44ee-b3f8-5b8727ef0b8d\") "
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.154735    5490 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.154825    5490 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.155003    5490 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v2m4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-lfkmq_kube-system(d9309c70-8cf5-4fdc-a79f-1c85f9ceda55): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.155077    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.536058    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:48.548933    5490 scope.go:111] "RemoveContainer" containerID="afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:49.551741    5490 scope.go:111] "RemoveContainer" containerID="afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:49.551872    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:49.552267    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:50.554373    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:50.554614    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:50.943560    5490 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/docker/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:26:51 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:51.555665    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:51 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:51.555949    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521420    5490 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521460    5490 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521586    5490 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v2m4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-lfkmq_kube-system(d9309c70-8cf5-4fdc-a79f-1c85f9ceda55): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521628    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [79f989d1f87dfbbfdcfeb997afdb759885178f62c07cdaf29baf001761967c6d] <==
	* 2021/08/16 22:26:42 Starting overwatch
	2021/08/16 22:26:42 Using namespace: kubernetes-dashboard
	2021/08/16 22:26:42 Using in-cluster config to connect to apiserver
	2021/08/16 22:26:42 Using secret token for csrf signing
	2021/08/16 22:26:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:26:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:26:42 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:26:42 Generating JWE encryption key
	2021/08/16 22:26:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:26:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:26:43 Initializing JWE encryption key from synchronized object
	2021/08/16 22:26:43 Creating in-cluster Sidecar client
	2021/08/16 22:26:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:43 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [cc250616450056102bb06e4b2a7a294752cbf8992bdf38ffd039f70b9bf5d938] <==
	* k8s.io/client-go/util/workqueue.(*Type).Get(0xc000181ce0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0001aef00, 0x18e5530, 0xc00004a100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000300100)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000300100, 0x18b3d60, 0xc0002741e0, 0x1, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000300100, 0x3b9aca00, 0x0, 0x1, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000300100, 0x3b9aca00, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 92 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0001a8940, 0xc0001ae000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:27:29.728241  273628 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210816221939-6487
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210816221939-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2",
	        "Created": "2021-08-16T22:19:40.83769465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:21:04.019065668Z",
	            "FinishedAt": "2021-08-16T22:21:01.776663505Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/hosts",
	        "LogPath": "/var/lib/docker/containers/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2-json.log",
	        "Name": "/default-k8s-different-port-20210816221939-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210816221939-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210816221939-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30e36cb4e5ddf168069308e3752ddd464c4c96b2e080fcc484bb7f568ecc42a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210816221939-6487",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210816221939-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210816221939-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210816221939-6487",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210816221939-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c01efb1697beacb1b10018f6698e40f168542a6fce479e8f3b57d471dcbb711",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32950"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c01efb1697b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210816221939-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "05d52cd72bdc"
	                    ],
	                    "NetworkID": "6a14296a15132239c81b36a2db1275d8af1a3ea741aaec5494005057cf547d13",
	                    "EndpointID": "70c88936941dd9b3d9600a590a64a27b951033e05cca2ed4a34f93adca901bfd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
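
The inspect dump above is the raw form of what the test helpers read to find where each container port is published on the host: under NetworkSettings.Ports, the container's 22/tcp is bound to 127.0.0.1:32954, 8444/tcp to 127.0.0.1:32951, and so on. As a rough sketch only (hostPortFor is a hypothetical helper, not harness code), the lookup amounts to:

// Hypothetical sketch, not part of the test harness: recover the published
// host port for a container port such as "22/tcp" from the
// `docker container inspect` JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors one entry under NetworkSettings.Ports in the dump.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func hostPortFor(container, port string) (string, error) {
	// docker inspect always emits a JSON array, even for a single container.
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports[port]) == 0 {
		return "", fmt.Errorf("no host binding for %s", port)
	}
	return entries[0].NetworkSettings.Ports[port][0].HostPort, nil
}

func main() {
	port, err := hostPortFor("default-k8s-different-port-20210816221939-6487", "22/tcp")
	fmt.Println(port, err)
}

The harness itself reaches the same field with a Go template rather than decoding the full JSON, as visible further down in this log: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" <container>.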
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487: exit status 2 (15.785411834s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:27:45.861884  275362 status.go:422] Error apiserver status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
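
Each [+]/[-] line in the stderr above is one named apiserver health check; /healthz is an aggregate, so the single failing check ([-]etcd failed) is enough to turn the whole endpoint into the HTTP 500 that the status command reported. "reason withheld" only means the apiserver hides the failure detail from callers it has not authorized to see it. A minimal sketch of querying the same endpoint verbosely (not harness code; the URL is copied from the error above, and TLS verification is skipped purely for illustration):

// Minimal sketch: hit the apiserver health endpoint the way the status
// check above does. ?verbose asks for the per-check [+]/[-] breakdown.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The apiserver's serving cert is issued for the cluster, not for
	// ad-hoc external callers, so verification is skipped here.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8444/healthz?verbose")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Any single failing check makes the aggregate endpoint return 500.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

The same per-check view is available from a configured client with kubectl get --raw '/healthz?verbose'.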
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 logs -n 25: exit status 110 (1m0.851833358s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p pause-20210816221349-6487                               | pause-20210816221349-6487                      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:38 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:27:35 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
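Note: the provisioning step above pushes CRI-O's insecure-registry setting over SSH. The same change, reduced to a standalone sketch run inside the node (file path, variable name, and CIDR taken verbatim from the log):

	sudo mkdir -p /etc/sysconfig
	printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio   # the restart makes CRI-O pick up the new env file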
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
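Note: the two sed edits above retarget /etc/crio/crio.conf. Collected into one manual sketch (the restart that makes them effective happens a few lines further down in the log):

	# Point CRI-O at minikube's pause image and the kindnet CNI network.
	sudo sed -i 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' /etc/crio/crio.conf
	sudo sed -i 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' /etc/crio/crio.conf
	sudo systemctl restart crio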
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
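Note: when the bridge-netfilter sysctl is missing, the log falls back to loading the module and enabling IPv4 forwarding. A sketch of that sequence; the explicit sysctl write is an assumption beyond what the log shows, added only so the earlier verification would pass on a re-run:

	sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # assumption: not run by the log itself
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"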
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
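Note: the hosts-file rewrite above is idempotent — it strips any stale host.minikube.internal line before appending the current mapping. As a sketch, using the same pattern as the logged command:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts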
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:fa
lse] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
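Note: the rendered kubeadm config above can be exercised without mutating the node. A sketch using kubeadm's dry-run mode (the flag exists upstream; the log itself never runs this):

	sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run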
	
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
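Note: after the unit file and 10-kubeadm.conf drop-in are scp'd into place above, systemd must re-read them before a kubelet restart picks up the new flags. A sketch of that step; the log achieves the equivalent later through kubeadm's kubelet-start phase rather than a direct restart:

	sudo systemctl daemon-reload
	sudo systemctl restart kubelet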
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
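Note: the openssl/ln pairs above implement OpenSSL's hashed-CA lookup, where each CA is symlinked under its subject hash with a .0 suffix. A sketch for one certificate (b5213941 is the hash the log computed for minikubeCA.pem):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0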
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:
false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
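Note: the stanzas above repeat one probe roughly every 200ms until an apiserver process appears; pgrep exiting 1 (no match) is what produces each "Process exited with status 1" line. The probe itself, as a sketch:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not up yet"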
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
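Note: restartCluster re-runs individual kubeadm init phases instead of a full init, so existing certs and etcd data survive. The five phases above, collected into one sketch:

	K=/var/lib/minikube/binaries/v1.22.0-rc.0
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH=$K:$PATH kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done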
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
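Note: each healthz check above is an HTTPS GET; the [+]/[-] bodies in the 500 responses are the endpoint's verbose form. A sketch of the same probe — the -k (skip TLS verification) is an assumption for brevity, whereas the logged client authenticates with the cluster's certificates:

	curl -sk "https://192.168.67.2:8443/healthz?verbose"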
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
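The system_pods.go wait above is essentially a pod list in the kube-system namespace, repeated until data comes back, with each pod's phase and readiness printed as seen in the lines before. A minimal client-go sketch, assuming the kubeconfig path from the log is readable where this runs:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // e.g. "coredns-..." Pending
        }
    }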
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
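The oom_adj probe above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the -16 it reports keeps the apiserver low on the OOM killer's target list. Reading it directly in Go, with pgrep standing in for the real ssh_runner pipeline:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -n picks the newest matching process.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in this run
    }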
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
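The cli_runner inspect above asks Docker which host port is mapped to the container's 22/tcp, so the SSH client knows where to dial. The same Go-template lookup, driven from Go via the docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format,
            "newest-cni-20210816222436-6487").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32969 in this run
    }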
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
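"scp memory --> path" in these lines means the manifest bytes are rendered in memory and streamed over the SSH connection rather than copied from a local file. A sketch of that pattern with golang.org/x/crypto/ssh, using the user and port from the sshutil lines above; the key path and host-key handling are simplified for illustration:

    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.Getenv("SSH_KEY")) // the id_rsa path from the sshutil lines
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32969", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        manifest := []byte("apiVersion: v1\n# ...rendered in memory...\n")
        sess.Stdin = bytes.NewReader(manifest)
        // Stream the bytes straight into the remote file; no temp copy on either side.
        if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
            panic(err)
        }
    }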
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
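The NodePressure verification above reads each node's reported capacity and conditions. A client-go sketch of that readout, against the same assumed kubeconfig as before:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                fmt.Printf("  %s=%s\n", c.Type, c.Status) // MemoryPressure/DiskPressure should be False
            }
        }
    }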
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
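"minor skew: 2" above compares kubectl's minor version (20) against the cluster's (22); Kubernetes supports kubectl within one minor version of the apiserver, so a skew above 1 produces the warning. The arithmetic, roughly:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component from a "major.minor.patch[-suffix]" version string.
    func minor(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        client, cluster := "1.20.5", "1.22.0-rc.0"
        skew := minor(cluster) - minor(client)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // 2
        if skew > 1 {
            fmt.Println("kubectl is outside the supported +/-1 skew window; warn")
        }
    }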
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
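The failure above is a bounded poll: the pod gets 4m0s to reach "Ready", after which the restart path is abandoned in favor of kubeadm reset. A sketch of the shape of that wait; checkReady is a hypothetical stand-in for the real pod-condition lookup, and the timeout is shortened for demonstration:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func waitPodReady(checkReady func() (bool, error), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := checkReady()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(2 * time.Second) // the real code logs "Ready":"False" on each miss
        }
        return errors.New(`timed out waiting for pod to be "Ready"`)
    }

    func main() {
        // The log uses a 4m0s timeout; 4s here keeps the demo quick.
        err := waitPodReady(func() (bool, error) { return false, nil }, 4*time.Second)
        fmt.Println(err) // on timeout, minikube falls back to kubeadm reset
    }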
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.798667  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.298823  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.798898  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.299125  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.798939  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.298461  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.799163  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.298377  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.798518  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:32.299080  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.495517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:31.495703  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:33.496362  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:32.798224  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.298433  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.799075  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.298503  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.798223  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.299182  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.798578  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.298228  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.798801  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:37.299144  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.996187  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:38.495700  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:37.798260  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.298197  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.798424  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.917845  238595 kubeadm.go:985] duration metric: took 13.296684424s to wait for elevateKubeSystemPrivileges.
	I0816 22:26:38.917877  238595 kubeadm.go:392] StartCluster complete in 5m29.078278154s
	I0816 22:26:38.917895  238595 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:38.917976  238595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:38.919347  238595 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
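The half-second `kubectl get sa default` loop that just finished (elevateKubeSystemPrivileges) is waiting for the default ServiceAccount to be minted before the minikube-rbac cluster-admin binding is useful. The same wait expressed with client-go, under the same kubeconfig assumption as the earlier sketches:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account exists; RBAC can take effect")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence of the log lines above
        }
    }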
	I0816 22:26:39.435280  238595 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816221939-6487" rescaled to 1
	I0816 22:26:39.435337  238595 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:26:39.436884  238595 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:39.435381  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:39.436944  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:39.435407  238595 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:39.437054  238595 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437066  238595 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437084  238595 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437097  238595 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437107  238595 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.437111  238595 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:39.437119  238595 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.435601  238595 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:26:39.437127  238595 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:39.437075  238595 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437147  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437156  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	W0816 22:26:39.437157  238595 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:39.437098  238595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437219  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437580  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437673  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437680  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437786  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.450925  238595 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454454  238595 node_ready.go:49] node "default-k8s-different-port-20210816221939-6487" has status "Ready":"True"
	I0816 22:26:39.454478  238595 node_ready.go:38] duration metric: took 3.504801ms waiting for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454492  238595 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:39.461585  238595 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:39.496014  238595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:39.496143  238595 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.496159  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:39.497741  238595 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.496222  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.497808  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:39.497821  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:39.497865  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.499561  238595 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.499598  238595 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:39.499623  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.500057  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.508968  238595 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:39.510786  238595 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.510877  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:39.510894  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:39.510963  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.543137  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
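The sed pipeline above splices a hosts {} block into the CoreDNS Corefile ahead of the forward directive, so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). Roughly the same edit done through the API instead of sed; the exact Corefile indentation is glossed over in this sketch:

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hosts := "hosts {\n   192.168.49.1 host.minikube.internal\n   fallthrough\n}\n"
        // Insert the hosts plugin just before the forward directive, as the sed script does.
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", hosts+"forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("host record injected into CoreDNS")
    }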
	I0816 22:26:39.551327  238595 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.551354  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:39.551418  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.562469  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.567015  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.585895  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.601932  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.730192  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:39.730216  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:39.735004  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:39.735028  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:39.825712  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:39.825735  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:39.828025  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:39.828046  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:39.829939  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.830581  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.917562  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.917594  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:39.918416  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:39.918442  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:39.934239  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.935303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:39.935323  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:40.024142  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:40.024168  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:40.121870  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:40.121954  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:40.213303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:40.213329  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:40.226600  238595 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0816 22:26:40.233649  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:40.233674  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:40.315993  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.316021  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:40.329860  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.913110  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08249574s)
	I0816 22:26:41.119373  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.185088873s)
	I0816 22:26:41.119413  238595 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:41.513353  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.183438758s)
	I0816 22:26:41.515520  238595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:26:41.515560  238595 addons.go:344] enableAddons completed in 2.080164328s
	I0816 22:26:41.516293  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:40.996044  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:42.996463  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:43.970224  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:45.016130  238595 pod_ready.go:92] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.016153  238595 pod_ready.go:81] duration metric: took 5.554536838s waiting for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.016169  238595 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020503  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.020523  238595 pod_ready.go:81] duration metric: took 4.344641ms waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020537  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024738  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.024753  238595 pod_ready.go:81] duration metric: took 4.208942ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024762  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028646  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.028661  238595 pod_ready.go:81] duration metric: took 3.89128ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028670  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032791  238595 pod_ready.go:92] pod "kube-proxy-4pmgn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.032812  238595 pod_ready.go:81] duration metric: took 4.13529ms waiting for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032823  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369533  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.369559  238595 pod_ready.go:81] duration metric: took 336.726404ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369571  238595 pod_ready.go:38] duration metric: took 5.915063438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:45.369595  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:45.369645  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:45.395595  238595 api_server.go:70] duration metric: took 5.960222514s to wait for apiserver process to appear ...
	I0816 22:26:45.395625  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:45.395637  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:26:45.400217  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:26:45.401067  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:26:45.401089  238595 api_server.go:129] duration metric: took 5.457124ms to wait for apiserver health ...
	I0816 22:26:45.401099  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:45.570973  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:26:45.571001  238595 system_pods.go:61] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.571006  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.571016  238595 system_pods.go:61] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.571020  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.571025  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.571028  238595 system_pods.go:61] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.571032  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.571039  238595 system_pods.go:61] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.571069  238595 system_pods.go:61] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:45.571074  238595 system_pods.go:74] duration metric: took 169.970426ms to wait for pod list to return data ...
	I0816 22:26:45.571085  238595 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:45.768620  238595 default_sa.go:45] found service account: "default"
	I0816 22:26:45.768644  238595 default_sa.go:55] duration metric: took 197.553773ms for default service account to be created ...
	I0816 22:26:45.768653  238595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:26:45.970940  238595 system_pods.go:86] 9 kube-system pods found
	I0816 22:26:45.970973  238595 system_pods.go:89] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.970982  238595 system_pods.go:89] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.970987  238595 system_pods.go:89] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.970993  238595 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.971000  238595 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.971006  238595 system_pods.go:89] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.971013  238595 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.971024  238595 system_pods.go:89] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.971037  238595 system_pods.go:89] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Running
	I0816 22:26:45.971046  238595 system_pods.go:126] duration metric: took 202.387682ms to wait for k8s-apps to be running ...
	I0816 22:26:45.971061  238595 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:26:45.971104  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:46.023089  238595 system_svc.go:56] duration metric: took 52.020591ms WaitForService to wait for kubelet.
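The kubelet wait above is a single exit-code check: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so no output parsing is needed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 means the unit is active; anything else surfaces as a non-nil error.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is active")
    }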
	I0816 22:26:46.023116  238595 kubeadm.go:547] duration metric: took 6.587748491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:26:46.023141  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:46.168888  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:46.168915  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:46.168933  238595 node_conditions.go:105] duration metric: took 145.786239ms to run NodePressure ...
	I0816 22:26:46.168945  238595 start.go:231] waiting for startup goroutines ...
	I0816 22:26:46.211558  238595 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:26:46.214728  238595 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816221939-6487" cluster and "default" namespace by default
	I0816 22:26:45.495975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:47.496653  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:49.995957  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:52.496048  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:54.204913  240293 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.057884699s)
	I0816 22:26:54.204974  240293 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:54.214048  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:54.214110  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:54.236967  240293 cri.go:76] found id: ""
	I0816 22:26:54.237019  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:54.243553  240293 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:54.243606  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:54.249971  240293 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
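	
	Exit status 2 is the expected outcome here: the kubeadm reset a few lines earlier removed all four kubeconfigs, so the probe fails and stale-config cleanup is skipped. The same decision, condensed to shell (a sketch; the real logic lives in kubeadm.go):
	
	    # Any missing file makes ls fail, which marks the node as fresh.
	    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
	      echo "prior control-plane configs present; stale-config cleanup would run"
	    else
	      echo "configs missing (ls exited $?); node treated as fresh"
	    fi
	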
	I0816 22:26:54.250416  240293 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:54.516364  240293 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:55.249703  240293 out.go:204]   - Booting up control plane ...
	I0816 22:26:54.996103  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:57.495660  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:59.495743  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:01.995335  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:03.995379  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:05.995637  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:08.496092  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:09.298595  240293 out.go:204]   - Configuring RBAC rules ...
	I0816 22:27:09.713304  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:27:09.713327  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:27:09.715227  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:27:09.715277  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:27:09.718863  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:27:09.718885  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:27:09.731677  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
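	
	The apply step runs against the node-local kubeconfig rather than the host's; the equivalent one-liner on the node, with paths copied from the Run: lines above:
	
	    sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	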
	I0816 22:27:09.962283  240293 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:27:09.962350  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:09.962373  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816221913-6487 minikube.k8s.io/updated_at=2021_08_16T22_27_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.060642  240293 ops.go:34] apiserver oom_adj: -16
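	
	The -16 comes straight from /proc and means the kernel will strongly avoid OOM-killing the apiserver; it can be read the same way ops.go reads it:
	
	    cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 on this node
	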
	I0816 22:27:10.060723  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.995882  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:13.495974  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:10.633246  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.133139  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.633557  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.133518  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.633029  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.132949  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.632656  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.133534  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.632964  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.133130  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.496295  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:17.995970  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:15.632812  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:16.132692  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:16.633691  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:17.133141  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:17.632912  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:18.132865  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:18.633533  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:19.132892  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:19.632997  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:20.133122  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:20.496121  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:22.995237  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:20.633092  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:21.132697  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:21.632742  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:22.133291  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:22.632839  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:23.133425  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:23.632752  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:24.132877  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:24.198432  240293 kubeadm.go:985] duration metric: took 14.236137948s to wait for elevateKubeSystemPrivileges.
	I0816 22:27:24.198462  240293 kubeadm.go:392] StartCluster complete in 6m0.995598802s
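	
	The run of identical "kubectl get sa default" lines above is a fixed-interval poll (the timestamps sit roughly 500ms apart) that gates the minikube-rbac cluster-admin binding on the default ServiceAccount existing. Condensed to a loop (a sketch; the real retry lives in Go):
	
	    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the spacing of the timestamps above
	    done
	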
	I0816 22:27:24.198481  240293 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.198572  240293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:27:24.200345  240293 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.715145  240293 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816221913-6487" rescaled to 1
	I0816 22:27:24.715193  240293 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:27:24.717805  240293 out.go:177] * Verifying Kubernetes components...
	I0816 22:27:24.717866  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:24.715250  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:27:24.715269  240293 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:27:24.717969  240293 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.717988  240293 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.717999  240293 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:27:24.718001  240293 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718022  240293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816221913-6487"
	I0816 22:27:24.718032  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718039  240293 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718052  240293 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:24.717986  240293 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718085  240293 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.718100  240293 addons.go:147] addon dashboard should already be in state true
	I0816 22:27:24.718131  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718343  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.715429  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:27:24.718059  240293 addons.go:147] addon metrics-server should already be in state true
	I0816 22:27:24.718417  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718547  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718594  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718818  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.782293  240293 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:27:24.783873  240293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:27:24.782196  240293 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.783987  240293 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:27:24.784020  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.784033  240293 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:24.784044  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:27:24.785627  240293 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.785699  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:27:24.785710  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:27:24.784098  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.785767  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.784669  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.787448  240293 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.787521  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:27:24.787537  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:27:24.787582  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.844134  240293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.844870  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
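	
	The sed expression above splices a hosts stanza ahead of CoreDNS's forward directive, so in-cluster lookups of host.minikube.internal resolve to the docker network gateway; the inserted block, read straight out of the sed expression, is:
	
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	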
	I0816 22:27:24.854809  240293 node_ready.go:49] node "embed-certs-20210816221913-6487" has status "Ready":"True"
	I0816 22:27:24.854830  240293 node_ready.go:38] duration metric: took 10.664038ms waiting for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.854841  240293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:27:24.855545  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.861143  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:24.861336  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.863265  240293 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:24.863285  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:27:24.863344  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.865862  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.902450  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
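	
	The repeated docker inspect calls above are how minikube discovers the host side of the node container's forwarded SSH port (32959 in this run). Standalone:
	
	    docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	      embed-certs-20210816221913-6487
	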
	I0816 22:27:25.213259  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:27:25.213287  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:27:25.213568  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:25.233517  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:25.239365  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:27:25.239389  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:27:25.313683  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:27:25.313712  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:27:25.433541  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:27:25.433568  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:27:25.434948  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:27:25.434968  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:27:25.527034  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:27:25.527059  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:27:25.613745  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.613777  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:27:25.625813  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:27:25.625851  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:27:25.713538  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.726637  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:27:25.726666  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:27:25.734858  240293 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0816 22:27:25.820941  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:27:25.820971  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:27:25.840244  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:27:25.840270  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:27:25.925179  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:25.925202  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:27:26.021980  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:26.324641  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111035481s)
	I0816 22:27:26.324667  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091124904s)
	I0816 22:27:26.939142  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.022283  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.30869814s)
	I0816 22:27:27.022370  240293 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:27.431601  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.409553263s)
	I0816 22:27:24.996042  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.495421  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.433693  240293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:27:27.433723  240293 addons.go:344] enableAddons completed in 2.718461512s
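	
	At this point the addon objects exist but their pods may still be pulling images; a quick follow-up check (a sketch; the metrics-server label selector is assumed from the addon manifest, not shown in this log):
	
	    kubectl --context embed-certs-20210816221913-6487 get pods -n kubernetes-dashboard
	    kubectl --context embed-certs-20210816221913-6487 get pods -n kube-system -l k8s-app=metrics-server
	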
	I0816 22:27:29.427787  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:29.496073  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.995232  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.927352  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:32.427442  240293 pod_ready.go:92] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:32.427460  240293 pod_ready.go:81] duration metric: took 7.566292628s waiting for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:32.427472  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.437803  240293 pod_ready.go:102] pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:34.934910  240293 pod_ready.go:97] error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
	I0816 22:27:34.934937  240293 pod_ready.go:81] duration metric: took 2.507455875s waiting for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	E0816 22:27:34.934947  240293 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
	I0816 22:27:34.934954  240293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938786  240293 pod_ready.go:92] pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.938802  240293 pod_ready.go:81] duration metric: took 3.83976ms waiting for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938813  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945030  240293 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.945045  240293 pod_ready.go:81] duration metric: took 6.225501ms waiting for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945054  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948474  240293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.948489  240293 pod_ready.go:81] duration metric: took 3.428771ms waiting for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948497  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951783  240293 pod_ready.go:92] pod "kube-proxy-hdhfc" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.951796  240293 pod_ready.go:81] duration metric: took 3.294223ms waiting for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951803  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136382  240293 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:35.136401  240293 pod_ready.go:81] duration metric: took 184.590897ms waiting for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136410  240293 pod_ready.go:38] duration metric: took 10.281557269s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:27:35.136426  240293 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:27:35.136458  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:27:35.159861  240293 api_server.go:70] duration metric: took 10.444645521s to wait for apiserver process to appear ...
	I0816 22:27:35.159888  240293 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:27:35.159899  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:27:35.164341  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:27:35.165220  240293 api_server.go:139] control plane version: v1.21.3
	I0816 22:27:35.165240  240293 api_server.go:129] duration metric: took 5.346619ms to wait for apiserver health ...
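	
	The health gate above is a plain GET against /healthz that must return 200 with body "ok", as the two lines before it show. From the host, approximately (-k skips TLS verification, which the real check does not do):
	
	    curl -sk https://192.168.76.2:8443/healthz   # expect: ok
	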
	I0816 22:27:35.165249  240293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:27:35.339424  240293 system_pods.go:59] 9 kube-system pods found
	I0816 22:27:35.339458  240293 system_pods.go:61] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.339466  240293 system_pods.go:61] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.339472  240293 system_pods.go:61] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.339478  240293 system_pods.go:61] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.339485  240293 system_pods.go:61] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.339492  240293 system_pods.go:61] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.339497  240293 system_pods.go:61] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.339509  240293 system_pods.go:61] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.339527  240293 system_pods.go:61] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.339535  240293 system_pods.go:74] duration metric: took 174.279391ms to wait for pod list to return data ...
	I0816 22:27:35.339548  240293 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:27:35.536578  240293 default_sa.go:45] found service account: "default"
	I0816 22:27:35.536602  240293 default_sa.go:55] duration metric: took 197.045764ms for default service account to be created ...
	I0816 22:27:35.536610  240293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:27:35.738632  240293 system_pods.go:86] 9 kube-system pods found
	I0816 22:27:35.738661  240293 system_pods.go:89] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.738666  240293 system_pods.go:89] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.738671  240293 system_pods.go:89] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.738675  240293 system_pods.go:89] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.738681  240293 system_pods.go:89] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.738685  240293 system_pods.go:89] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.738689  240293 system_pods.go:89] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.738695  240293 system_pods.go:89] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.738700  240293 system_pods.go:89] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.738707  240293 system_pods.go:126] duration metric: took 202.09278ms to wait for k8s-apps to be running ...
	I0816 22:27:35.738724  240293 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:27:35.738761  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:35.748257  240293 system_svc.go:56] duration metric: took 9.52848ms WaitForService to wait for kubelet.
	I0816 22:27:35.748278  240293 kubeadm.go:547] duration metric: took 11.033066699s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:27:35.748301  240293 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:27:35.936039  240293 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:27:35.936064  240293 node_conditions.go:123] node cpu capacity is 8
	I0816 22:27:35.936078  240293 node_conditions.go:105] duration metric: took 187.771781ms to run NodePressure ...
	I0816 22:27:35.936087  240293 start.go:231] waiting for startup goroutines ...
	I0816 22:27:35.979326  240293 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:27:35.981602  240293 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816221913-6487" cluster and "default" namespace by default
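	
	The skew line above compares client and server minor versions; one minor apart is within kubectl's supported range, so this is logged rather than warned about. The underlying data is what a plain version call reports (output below reconstructed from the log line, not captured):
	
	    kubectl version --short
	    # Client Version: v1.20.5
	    # Server Version: v1.21.3
	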
	I0816 22:27:34.495967  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:36.995351  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:38.995682  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:41.495818  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:43.496112  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:21:04 UTC, end at Mon 2021-08-16 22:27:46 UTC. --
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.769060255Z" level=info msg="Image k8s.gcr.io/echoserver:1.4 not found" id=1fe52a26-1204-4d66-b5e1-f86108586e52 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.769470375Z" level=info msg="Pulling image: k8s.gcr.io/echoserver:1.4" id=75b240ca-01e9-46c7-b9fa-c328be0906b7 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.771719747Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:42.971348298Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.829806594Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=75b240ca-01e9-46c7-b9fa-c328be0906b7 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.830645861Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=1050cdf7-3bdd-405e-9429-2f7f716bd3d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.832028399Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1050cdf7-3bdd-405e-9429-2f7f716bd3d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.832794459Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=fc97aa62-5e94-4aa7-b1cb-456d16eac579 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.997514599Z" level=info msg="Created container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=fc97aa62-5e94-4aa7-b1cb-456d16eac579 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:47 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:47.998036077Z" level=info msg="Starting container: afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a" id=b61fa1f3-5a93-4f75-911d-40df0a61ef2d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.021526225Z" level=info msg="Started container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=b61fa1f3-5a93-4f75-911d-40df0a61ef2d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.549395063Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=ea8d175d-831e-4d16-9099-2fcda88e5b1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.551268934Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ea8d175d-831e-4d16-9099-2fcda88e5b1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.551865308Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=bebb2ae6-c93f-4d46-9987-5157ce55f6f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.553362492Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bebb2ae6-c93f-4d46-9987-5157ce55f6f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.554088163Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=13d383a2-756f-46d4-acf2-b4cbccf52a39 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.703445367Z" level=info msg="Created container 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=13d383a2-756f-46d4-acf2-b4cbccf52a39 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.703981461Z" level=info msg="Starting container: 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f" id=42cb85d5-f3e0-4da6-845b-b75473d68959 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:48.728797601Z" level=info msg="Started container 8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=42cb85d5-f3e0-4da6-845b-b75473d68959 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:49.552835458Z" level=info msg="Removing container: afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a" id=5545679f-29e2-4bba-94a8-6485ead42087 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:49.588498443Z" level=info msg="Removed container afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn/dashboard-metrics-scraper" id=5545679f-29e2-4bba-94a8-6485ead42087 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.505570038Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=75b55660-3915-443e-abfa-132b2d86d7eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.505884677Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=75b55660-3915-443e-abfa-132b2d86d7eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.506439374Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=af9c8952-fa83-478f-936b-c1c62b2d5de8 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 crio[243]: time="2021-08-16 22:26:53.517004589Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
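	
	fake.domain is not a real registry, so these pulls can never succeed; that is why "metrics-server-7c784ccb57-lfkmq" stayed Pending in the system_pods listings earlier in this run. The failure surfaces as image-pull events on the pod (a sketch; the k8s-app label is assumed from the addon manifest):
	
	    kubectl --context default-k8s-different-port-20210816221939-6487 -n kube-system \
	      describe pod -l k8s-app=metrics-server | sed -n '/Events:/,$p'
	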
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	8ab6c5882403d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   57 seconds ago       Exited              dashboard-metrics-scraper   1                   5c3713ecb972b
	79f989d1f87df       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   About a minute ago   Running             kubernetes-dashboard        0                   4d3db69d8b8ff
	cc25061645005       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner         0                   c9fffbe595965
	9913deebfe52a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   About a minute ago   Running             coredns                     0                   c72232d083282
	9b088643e3470       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni                 0                   9ed325eb35d3f
	4014814f8de6c       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                  0                   3bf6028b9f9f4
	424818e3cd136       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                        0                   a56e9bdba8779
	8fda9c602da54       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager     0                   07858669fc1c0
	5f1c7a968a713       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler              0                   5f7ab1a5d2a8b
	ed8f2fd04b802       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver              0                   69b476c5677b4
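	
	The table above is the CRI view of the node, exited containers included; it is roughly what this reports:
	
	    out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 ssh "sudo crictl ps -a"
	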
	
	* 
	* ==> coredns [9913deebfe52a0cf2858d139bbde3a6115b0f3e565c62fc3705dbf7a8fe23971] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.832077] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.335384] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:27] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[ +13.663740] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth55ef9b3c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 37 a8 8c 4d 9e 08 06        .......7..M...
	[  +2.163880] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb864e10f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 f5 50 a9 1a cc 08 06        ......&.P.....
	[  +0.707561] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth9c8775f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 1b 78 c1 d0 58 08 06        ......J.x..X..
	[  +0.000675] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6f717d76
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa 3a da 18 32 b9 08 06        .......:..2...
	[ +12.646052] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:28] cgroup: cgroup2: unknown option "nsdelegate"
	[  +3.979553] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [424818e3cd13674e399c36ecfdfa799fadb4897a5ee828f9351784d6deaf5547] <==
	* raft2021/08/16 22:26:17 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:26:17.830341 W | auth: simple token is not cryptographically signed
	2021-08-16 22:26:17.835242 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:26:17.835370 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-16 22:26:17.835811 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-16 22:26:17.837782 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:26:17.837892 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-16 22:26:17.837943 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/16 22:26:17 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/16 22:26:17 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-16 22:26:17.928543 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:26:17.929278 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:26:17.929338 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:26:17.929373 I | embed: ready to serve client requests
	2021-08-16 22:26:17.929478 I | embed: ready to serve client requests
	2021-08-16 22:26:17.930592 I | etcdserver: published {Name:default-k8s-different-port-20210816221939-6487 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-16 22:26:17.932644 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:26:17.933111 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-16 22:26:36.227188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:26:44.854469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:26:54.854574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
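	
	The /health probes are answered on etcd's metrics listener (127.0.0.1:2381, per the "listening for metrics" line above); client traffic on 2379 requires certificates (client-cert-auth = true), so the unauthenticated endpoint is the one to poke from inside the node:
	
	    out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 ssh \
	      "curl -s http://127.0.0.1:2381/health"
	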
	
	* 
	* ==> kernel <==
	*  22:28:46 up  1:08,  0 users,  load average: 1.84, 2.35, 2.25
	Linux default-k8s-different-port-20210816221939-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ed8f2fd04b8028dc3b19dd83f0b06d817cfad8c6bb15b23ffd738c0796981129] <==
	* W0816 22:28:43.856424       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:44.156622       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:44.325695       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:44.399546       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:44.514291       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:45.155989       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:45.500914       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:45.945117       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.007572       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.017385       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.052700       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.057036       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.097810       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.110828       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.342686       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:28:46.405153       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:28:46.470846       1 trace.go:205] Trace[1825994392]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:27:46.470) (total time: 60000ms):
	Trace[1825994392]: [1m0.000161545s] [1m0.000161545s] END
	E0816 22:28:46.470875       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:28:46.470925       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:28:46.471982       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:28:46.473133       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:28:46.474527       1 trace.go:205] Trace[1880130197]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:27:46.470) (total time: 60003ms):
	Trace[1880130197]: [1m0.003855917s] [1m0.003855917s] END
	W0816 22:28:46.492024       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
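	The minute-long run of dial timeouts above means the apiserver lost its connection to etcd on 127.0.0.1:2379, which is exactly what turns the "List etcd3" trace into a 60s timeout and the kubectl List into the "Handler timeout" errors. A quick way to probe etcd health from inside the node is etcdctl over minikube ssh; this is a sketch that assumes etcdctl is present in the node image and that the certificates sit in minikube's usual layout under /var/lib/minikube/certs, neither of which this report verifies:
	
	  out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 ssh "sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"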
	
	* 
	* ==> kube-controller-manager [8fda9c602da541a3efd623062eb9b12546ce7f7dbe30779b8dcc048fafb8e49d] <==
	* I0816 22:26:40.917674       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-lfkmq"
	I0816 22:26:41.132391       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:26:41.137636       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:26:41.144286       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:26:41.214906       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.219957       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.220034       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.220235       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.225299       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:26:41.225482       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.225549       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:26:41.229857       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:26:41.229875       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:26:41.316688       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-jmcw9"
	I0816 22:26:41.316805       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-swsbn"
	I0816 22:26:43.693464       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:27:08.842687       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:09.266493       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:27:38.861823       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:39.283373       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:07.717842       1 node_lifecycle_controller.go:1107] Error updating node default-k8s-different-port-20210816221939-6487: Timeout: request did not complete within requested timeout context deadline exceeded
	E0816 22:28:08.871706       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:09.298651       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:38.895146       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:39.313536       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
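	The repeating metrics.k8s.io/v1beta1 failures here are a downstream symptom rather than a controller-manager bug: metrics-server never starts (its image pull from fake.domain fails, as the kubelet log below shows), so the aggregated API it backs stays unavailable and the resource-quota and garbage-collector controllers re-log it every 30 seconds. Assuming the kubeconfig context matches the profile name, the backing APIService can be inspected directly:
	
	  kubectl --context default-k8s-different-port-20210816221939-6487 get apiservice v1beta1.metrics.k8s.io -o wide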
	
	* 
	* ==> kube-proxy [4014814f8de6c11880e36f0bb888b4734c3a928cbb02aaf16da299c541e2a01d] <==
	* I0816 22:26:39.398397       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 22:26:39.398449       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 22:26:39.398479       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:26:39.425786       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:26:39.425822       1 server_others.go:212] Using iptables Proxier.
	I0816 22:26:39.425835       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:26:39.425844       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:26:39.426182       1 server.go:643] Version: v1.21.3
	I0816 22:26:39.426941       1 config.go:315] Starting service config controller
	I0816 22:26:39.427020       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:26:39.427052       1 config.go:224] Starting endpoint slice config controller
	I0816 22:26:39.427070       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:26:39.430581       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:26:39.431603       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:26:39.528071       1 shared_informer.go:247] Caches are synced for service config 
	I0816 22:26:39.528209       1 shared_informer.go:247] Caches are synced for endpoint slice config 
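	The only oddity in this block is the "Unknown proxy mode" warning: the mode field in the kube-proxy configuration was empty, so kube-proxy fell back to iptables, which is the expected default here. The effective configuration can be read back from the ConfigMap kubeadm generates (same context-name assumption as above):
	
	  kubectl --context default-k8s-different-port-20210816221939-6487 -n kube-system get configmap kube-proxy -o yaml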
	
	* 
	* ==> kube-scheduler [5f1c7a968a7136b41987a6c94cc5f792b8425e5a3aacaefae69394da84ed0a4c] <==
	* I0816 22:26:22.115874       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:22.115950       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:26:22.120066       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:26:22.120156       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:26:22.132709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:26:22.132945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:26:22.133062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:26:22.133275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:26:22.133389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:26:22.133492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:26:22.133485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:22.133649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:26:22.133719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:26:22.135285       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:26:22.139706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:26:22.139865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.043228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.213299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:26:23.263893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:26:23.268967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:26:23.313677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:26:23.313779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0816 22:26:23.616159       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
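	The burst of "forbidden" errors is the usual startup race: the scheduler's informers begin listing resources before kubeadm has finished wiring up the system:kube-scheduler RBAC bindings, and the closing "Caches are synced" line shows it recovered. A hypothetical spot-check for any one of these permissions, using kubectl impersonation against the same context:
	
	  kubectl --context default-k8s-different-port-20210816221939-6487 auth can-i list pods --as=system:kube-scheduler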
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:21:04 UTC, end at Mon 2021-08-16 22:28:46 UTC. --
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338564    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a7f8825-8975-4257-a108-92592ad8f017-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-jmcw9\" (UID: \"0a7f8825-8975-4257-a108-92592ad8f017\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338647    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97t8q\" (UniqueName: \"kubernetes.io/projected/0a7f8825-8975-4257-a108-92592ad8f017-kube-api-access-97t8q\") pod \"kubernetes-dashboard-6fcdf4f6d-jmcw9\" (UID: \"0a7f8825-8975-4257-a108-92592ad8f017\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338778    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a774169-234c-44ee-b3f8-5b8727ef0b8d-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-swsbn\" (UID: \"4a774169-234c-44ee-b3f8-5b8727ef0b8d\") "
	Aug 16 22:26:41 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:41.338841    5490 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcglf\" (UniqueName: \"kubernetes.io/projected/4a774169-234c-44ee-b3f8-5b8727ef0b8d-kube-api-access-wcglf\") pod \"dashboard-metrics-scraper-8685c45546-swsbn\" (UID: \"4a774169-234c-44ee-b3f8-5b8727ef0b8d\") "
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.154735    5490 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.154825    5490 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.155003    5490 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v2m4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-lfkmq_kube-system(d9309c70-8cf5-4fdc-a79f-1c85f9ceda55): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.155077    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:42 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:42.536058    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:48 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:48.548933    5490 scope.go:111] "RemoveContainer" containerID="afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:49.551741    5490 scope.go:111] "RemoveContainer" containerID="afde3610c9869490a7f3366a1ce25fb4e93b0b2dc25e4bf4ad303628782ef94a"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:49.551872    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:49 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:49.552267    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:50.554373    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:50.554614    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:50 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:50.943560    5490 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2/docker/05d52cd72bdcf94c00d8b611c45d4aeff6bddfed7127e78286ad1e981b86d2f2\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:26:51 default-k8s-different-port-20210816221939-6487 kubelet[5490]: I0816 22:26:51.555665    5490 scope.go:111] "RemoveContainer" containerID="8ab6c5882403dd54d5afc307dfb53f78df4f2f9a172130291e0fde8b2dd0d34f"
	Aug 16 22:26:51 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:51.555949    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-swsbn_kubernetes-dashboard(4a774169-234c-44ee-b3f8-5b8727ef0b8d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-swsbn" podUID=4a774169-234c-44ee-b3f8-5b8727ef0b8d
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521420    5490 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521460    5490 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521586    5490 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v2m4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-lfkmq_kube-system(d9309c70-8cf5-4fdc-a79f-1c85f9ceda55): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 16 22:26:53 default-k8s-different-port-20210816221939-6487 kubelet[5490]: E0816 22:26:53.521628    5490 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-lfkmq" podUID=d9309c70-8cf5-4fdc-a79f-1c85f9ceda55
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:57 default-k8s-different-port-20210816221939-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
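	The ErrImagePull/ImagePullBackOff loop above is expected in this suite: the metrics-server image is deliberately pointed at the unresolvable registry fake.domain, so every pull fails at DNS resolution. The same failure can be reproduced at the CRI level, reusing the profile name from these logs:
	
	  out/minikube-linux-amd64 -p default-k8s-different-port-20210816221939-6487 ssh "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"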
	
	* 
	* ==> kubernetes-dashboard [79f989d1f87dfbbfdcfeb997afdb759885178f62c07cdaf29baf001761967c6d] <==
	* 2021/08/16 22:26:42 Using namespace: kubernetes-dashboard
	2021/08/16 22:26:42 Using in-cluster config to connect to apiserver
	2021/08/16 22:26:42 Using secret token for csrf signing
	2021/08/16 22:26:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:26:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:26:42 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:26:42 Generating JWE encryption key
	2021/08/16 22:26:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:26:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:26:43 Initializing JWE encryption key from synchronized object
	2021/08/16 22:26:43 Creating in-cluster Sidecar client
	2021/08/16 22:26:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:43 Serving insecurely on HTTP port: 9090
	2021/08/16 22:27:36 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:42 Starting overwatch
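	Consistent with the kubelet log above, the metric client health check keeps failing because the dashboard-metrics-scraper container is crash-looping, so the dashboard retries every 30 seconds (the 22:26:42 "Starting overwatch" line landing after the 22:27:36 entry is just capture ordering, not a restart). Pod and service state for the namespace can be listed in one shot, with the same context-name assumption as above:
	
	  kubectl --context default-k8s-different-port-20210816221939-6487 -n kubernetes-dashboard get pods,svc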
	
	* 
	* ==> storage-provisioner [cc250616450056102bb06e4b2a7a294752cbf8992bdf38ffd039f70b9bf5d938] <==
	* k8s.io/client-go/util/workqueue.(*Type).Get(0xc000181ce0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0001aef00, 0x18e5530, 0xc00004a100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000300100)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000300100, 0x18b3d60, 0xc0002741e0, 0x1, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000300100, 0x3b9aca00, 0x0, 0x1, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000300100, 0x3b9aca00, 0xc00010a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 92 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0001a8940, 0xc0001ae000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:28:46.474445  276026 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (109.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (24.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210816221913-6487 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210816221913-6487 --alsologtostderr -v=1: exit status 80 (1.934980091s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210816221913-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:27:46.752379  276304 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:27:46.752494  276304 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:27:46.752503  276304 out.go:311] Setting ErrFile to fd 2...
	I0816 22:27:46.752507  276304 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:27:46.752622  276304 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:27:46.752810  276304 out.go:305] Setting JSON to false
	I0816 22:27:46.752829  276304 mustload.go:65] Loading cluster: embed-certs-20210816221913-6487
	I0816 22:27:46.753216  276304 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:27:46.753616  276304 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:46.794040  276304 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:46.794780  276304 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210816221913-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:27:46.797308  276304 out.go:177] * Pausing node embed-certs-20210816221913-6487 ... 
	I0816 22:27:46.797337  276304 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:46.797534  276304 ssh_runner.go:149] Run: systemctl --version
	I0816 22:27:46.797565  276304 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:46.838530  276304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:46.931890  276304 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:46.940641  276304 pause.go:50] kubelet running: true
	I0816 22:27:46.940684  276304 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:27:47.087462  276304 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:27:47.087534  276304 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:27:47.155153  276304 cri.go:76] found id: "debf85165af7410de1807f28a939ac729e5268c28dd7659f8c7023882c3ca649"
	I0816 22:27:47.155180  276304 cri.go:76] found id: "0a0ea4978ab6cd1089b9d06f0a278a1b0505d7d08360f002b22ad418383c54c6"
	I0816 22:27:47.155187  276304 cri.go:76] found id: "f33617d33e5848e02ba3ee3dd66489c253c623409536df43768ae928c1fcbc88"
	I0816 22:27:47.155193  276304 cri.go:76] found id: "0aa147fb5100088ef2b57609d0f2fcf92c5d3be6e41da177a1be6f4523451318"
	I0816 22:27:47.155206  276304 cri.go:76] found id: "8bf1c59231af08989a5db07139984cfdf2c0c9cf6fdc1d6e5bbf3f4a03bc5362"
	I0816 22:27:47.155211  276304 cri.go:76] found id: "aa576929be7b6ab6d22ef3cb64aa2fa59c7f5ca84a125be69a9a4b61bf1a0ef7"
	I0816 22:27:47.155215  276304 cri.go:76] found id: "ec5f895255549ca13063d25ca771f194444cccce85d22ec196e923f9c0520e16"
	I0816 22:27:47.155218  276304 cri.go:76] found id: "34a8effb725d15ebb6c34b6f90d57dbc544e5ec2f8403d07ec1e1f6196fc373a"
	I0816 22:27:47.155222  276304 cri.go:76] found id: "9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	I0816 22:27:47.155229  276304 cri.go:76] found id: "47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d"
	I0816 22:27:47.155235  276304 cri.go:76] found id: ""
	I0816 22:27:47.155270  276304 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210816221913-6487 --alsologtostderr -v=1 failed: exit status 80
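The trace shows pause got as far as enumerating the kube-system containers and then failed on its final logged step, sudo runc list -f json, with nothing printed after it. Both steps can be replayed by hand against the same profile, verbatim from the commands in the stderr above:

	out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"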
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210816221913-6487
helpers_test.go:236: (dbg) docker inspect embed-certs-20210816221913-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f",
	        "Created": "2021-08-16T22:19:14.835612448Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:21:17.267754297Z",
	            "FinishedAt": "2021-08-16T22:21:14.948914507Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/hosts",
	        "LogPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f-json.log",
	        "Name": "/embed-certs-20210816221913-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210816221913-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210816221913-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210816221913-6487",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210816221913-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210816221913-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210816221913-6487",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210816221913-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "66c932cda8a09008d47d9a2d61331c28459055d0ba616daa818b44be955c6ed2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/66c932cda8a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210816221913-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4e30df1bcd77"
	                    ],
	                    "NetworkID": "c3bd6b7609c0d09834ebe1c44b095ba7758b47f6dd42c7201a8fb39db16dfef9",
	                    "EndpointID": "1d8c47e872253088d5b9fa469cb566174b22dabfe6132ec5b8774d84dcb15b24",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
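The inspect JSON above is also how the harness learns where to SSH: the host port mapped to the container's 22/tcp (32959 here) is read back with the same Go template that appears later in this log. A minimal standalone sketch of that lookup, with the container name taken from the output above and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort shells out to docker and evaluates the same template minikube's
// cli_runner uses to find the host port mapped to the container's 22/tcp.
func hostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("embed-certs-20210816221913-6487")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 32959 per the JSON above
}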
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487: exit status 2 (310.611019ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 logs -n 25
E0816 22:27:51.831074    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:51.836340    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:51.846551    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:51.866765    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:51.906984    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:51.987278    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:52.147643    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:52.468175    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:53.109073    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:54.389558    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:27:56.949860    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 logs -n 25: exit status 110 (10.842230398s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:27:35 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:27:46 UTC | Mon, 16 Aug 2021 22:27:46 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
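The handshake failure above clears a few seconds later (see the successful `SSH cmd err, output: <nil>` below): the container was only just restarted, so sshd is not yet accepting connections. A minimal sketch of the retry idea, not libmachine's actual implementation, against the mapped port shown in this log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 32969 is the host port mapped to this container's 22/tcp.
	// Immediately after `docker start`, dials can fail (connection reset
	// by peer) until sshd inside the container is listening.
	addr := "127.0.0.1:32969"
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("port is accepting connections")
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying\n", attempt, err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for", addr)
}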
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
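Per the san=[...] list above, the generated server.pem should carry the node IP, loopback, localhost, minikube and the profile name. A small sketch to verify that, assuming the cert has been copied to a local server.pem (the real file lives under .minikube/machines/ as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("server.pem") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // e.g. localhost, minikube, newest-cni-...
	fmt.Println("IP SANs:", cert.IPAddresses) // e.g. 192.168.67.2, 127.0.0.1
}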
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
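The literal `printf %!s(MISSING)` in the provisioning command above (and again in the crictl.yaml step below) is Go's fmt package reporting a %s verb with no matching operand; most likely the command string, which itself contains a %s, was re-rendered through a Printf-style logger without arguments. A one-liner that reproduces the marker:

package main

import "fmt"

func main() {
	// fmt substitutes %!s(MISSING) when a verb has no argument to consume.
	s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
	fmt.Println(s) // sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
}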
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
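	Note: the sysctl failure above is recoverable because the br_netfilter module simply was not loaded yet, which is exactly what the two commands above do; a minimal sketch of the recovery and re-check on the node:
	    # load the module, then both keys should resolve instead of 'cannot stat'
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward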
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
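	Note: the kubelet drop-in above lands at /etc/kubernetes/.../10-kubeadm.conf via the scp a few lines below; a quick way to inspect the merged unit on the node, as a sketch:
	    # show the base kubelet unit plus all drop-ins, then reload if edited by hand
	    systemctl cat kubelet
	    sudo systemctl daemon-reload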
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
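	Note: both /etc/hosts edits above follow one idempotent pattern: strip any stale entry for the name, append the fresh mapping, then copy the temp file back. A minimal sketch with a hypothetical helper name:
	    # update_hosts <ip> <name> -- remove old entry, append new, install the result
	    update_hosts() {
	      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
	      sudo cp /tmp/h.$$ /etc/hosts
	    }
	    update_hosts 192.168.67.2 control-plane.minikube.internal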
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
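	Note: the openssl/ln sequence above implements the standard OpenSSL hashed-certificate layout, where each CA file gets a symlink named <subject-hash>.0 under /etc/ssl/certs; a minimal sketch using one of the certs from the log:
	    # compute the subject hash and link the cert under it
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"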
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
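	Note: rather than a full `kubeadm init`, the restart path above re-runs individual init phases against the same config; a condensed sketch of that sequence, with paths as logged:
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done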
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
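	Note: the healthz wait above amounts to polling the endpoint until it returns 200; a minimal curl equivalent, assuming the self-signed apiserver cert (hence -k):
	    # -f fails on HTTP 500, -s/-k keep it quiet and skip TLS verification
	    until curl -ksf https://192.168.67.2:8443/healthz >/dev/null; do sleep 0.5; done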
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
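The healthz step above boils down to an HTTPS GET against the apiserver and treating a 200 "ok" body as healthy. A minimal sketch, assuming the endpoint from the log; skipping TLS verification is a simplification for illustration only, since minikube itself authenticates with the cluster's CA:

```go
// Hedged sketch of the healthz poll logged above: GET /healthz on the
// apiserver; a 200 with body "ok" counts as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// InsecureSkipVerify is an assumption for brevity, not minikube's behavior.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
```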
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
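The NodePressure lines above read node capacity (309568300Ki ephemeral storage, 8 CPUs) off the node object. A minimal client-go sketch of that read, assuming the kubeconfig path from the log; the program shape is ours, not minikube's node_conditions code:

```go
// Hedged sketch: list nodes and print the capacity fields logged above.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[v1.ResourceCPU]
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```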
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
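The addon installs above follow one pattern: stream each manifest to /etc/kubernetes/addons/ over SSH, then run one batched `kubectl apply -f ... -f ...` with the pinned binary. A hedged sketch of the apply half; binary path and manifest paths are copied from the log, the `applyManifests` wrapper is ours:

```go
// Hedged sketch: batch-apply addon manifests the way the Run: line above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	// sudo accepts VAR=value prefixes as environment assignments.
	cmd := exec.Command("sudo", append([]string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyManifests(
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```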
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
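The "minor skew: 2" warning above is plain version arithmetic: minor 20 of the local kubectl versus minor 22 of the cluster. A minimal sketch of that computation, assuming simple dot-splitting is enough for these version strings:

```go
// Hedged sketch of the minor-skew check behind the warning above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(version string) int {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	m, _ := strconv.Atoi(parts[1]) // "1.22.0-rc.0" -> 22
	return m
}

func main() {
	client, cluster := "1.20.5", "1.22.0-rc.0"
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}
```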
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.798667  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.298823  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.798898  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.299125  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.798939  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.298461  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.799163  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.298377  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.798518  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:32.299080  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.495517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:31.495703  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:33.496362  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:32.798224  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.298433  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.799075  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.298503  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.798223  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.299182  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.798578  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.298228  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.798801  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:37.299144  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.996187  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:38.495700  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:37.798260  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.298197  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.798424  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.917845  238595 kubeadm.go:985] duration metric: took 13.296684424s to wait for elevateKubeSystemPrivileges.
	I0816 22:26:38.917877  238595 kubeadm.go:392] StartCluster complete in 5m29.078278154s
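The run of near-identical `kubectl get sa default` lines above is a polling loop: elevateKubeSystemPrivileges cannot bind RBAC to the default service account until the controller manager has created it, so the command is retried roughly every 500ms until it exits 0. A hedged sketch of that loop; command and paths are verbatim from the log, the `waitForDefaultSA` helper and timeout are assumptions:

```go
// Hedged sketch: poll for the default service account until it exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.21.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(1 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```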
	I0816 22:26:38.917895  238595 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:38.917976  238595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:38.919347  238595 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:39.435280  238595 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816221939-6487" rescaled to 1
	I0816 22:26:39.435337  238595 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:26:39.436884  238595 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:39.435381  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:39.436944  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:39.435407  238595 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:39.437054  238595 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437066  238595 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437084  238595 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437097  238595 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437107  238595 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.437111  238595 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:39.437119  238595 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.435601  238595 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:26:39.437127  238595 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:39.437075  238595 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437147  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437156  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	W0816 22:26:39.437157  238595 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:39.437098  238595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437219  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437580  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437673  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437680  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437786  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.450925  238595 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454454  238595 node_ready.go:49] node "default-k8s-different-port-20210816221939-6487" has status "Ready":"True"
	I0816 22:26:39.454478  238595 node_ready.go:38] duration metric: took 3.504801ms waiting for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454492  238595 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:39.461585  238595 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:39.496014  238595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:39.496143  238595 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.496159  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:39.497741  238595 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.496222  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.497808  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:39.497821  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:39.497865  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.499561  238595 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.499598  238595 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:39.499623  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.500057  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.508968  238595 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:39.510786  238595 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.510877  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:39.510894  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:39.510963  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.543137  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:26:39.551327  238595 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.551354  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:39.551418  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.562469  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.567015  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.585895  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.601932  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.730192  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:39.730216  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:39.735004  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:39.735028  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:39.825712  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:39.825735  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:39.828025  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:39.828046  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:39.829939  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.830581  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.917562  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.917594  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:39.918416  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:39.918442  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:39.934239  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.935303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:39.935323  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:40.024142  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:40.024168  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:40.121870  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:40.121954  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:40.213303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:40.213329  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:40.226600  238595 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
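The host record injection confirmed above is the sed pipeline logged at 22:26:39.543137: a `hosts` stanza mapping the gateway IP to host.minikube.internal is inserted ahead of CoreDNS's `forward` plugin. A hedged Go rendering of that edit on an abbreviated sample Corefile (minikube itself does this with sed over the live configmap):

```go
// Hedged sketch: insert a hosts{} stanza before the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        loop
}`
	hosts := "hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n        "
	patched := strings.Replace(corefile, "forward . /etc/resolv.conf",
		hosts+"forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}
```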
	I0816 22:26:40.233649  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:40.233674  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:40.315993  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.316021  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:40.329860  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.913110  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08249574s)
	I0816 22:26:41.119373  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.185088873s)
	I0816 22:26:41.119413  238595 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:41.513353  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.183438758s)
	I0816 22:26:41.515520  238595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:26:41.515560  238595 addons.go:344] enableAddons completed in 2.080164328s
	I0816 22:26:41.516293  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:40.996044  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:42.996463  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:43.970224  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:45.016130  238595 pod_ready.go:92] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.016153  238595 pod_ready.go:81] duration metric: took 5.554536838s waiting for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.016169  238595 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020503  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.020523  238595 pod_ready.go:81] duration metric: took 4.344641ms waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020537  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024738  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.024753  238595 pod_ready.go:81] duration metric: took 4.208942ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024762  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028646  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.028661  238595 pod_ready.go:81] duration metric: took 3.89128ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028670  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032791  238595 pod_ready.go:92] pod "kube-proxy-4pmgn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.032812  238595 pod_ready.go:81] duration metric: took 4.13529ms waiting for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032823  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369533  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.369559  238595 pod_ready.go:81] duration metric: took 336.726404ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369571  238595 pod_ready.go:38] duration metric: took 5.915063438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
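Each pod_ready transition above keys off the pod's PodReady condition: "Ready":"True" in the log means that condition's status flipped to True. A minimal client-go sketch of the same check, assuming the kubeconfig path from the log; `podReady` is our helper name, and the kube-dns selector mirrors the labels listed above:

```go
// Hedged sketch: report Ready status for label-selected kube-system pods.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s Ready=%v\n", p.Name, podReady(&p))
	}
}
```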
	I0816 22:26:45.369595  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:45.369645  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:45.395595  238595 api_server.go:70] duration metric: took 5.960222514s to wait for apiserver process to appear ...
	I0816 22:26:45.395625  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:45.395637  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:26:45.400217  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:26:45.401067  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:26:45.401089  238595 api_server.go:129] duration metric: took 5.457124ms to wait for apiserver health ...
	I0816 22:26:45.401099  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:45.570973  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:26:45.571001  238595 system_pods.go:61] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.571006  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.571016  238595 system_pods.go:61] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.571020  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.571025  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.571028  238595 system_pods.go:61] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.571032  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.571039  238595 system_pods.go:61] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.571069  238595 system_pods.go:61] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:45.571074  238595 system_pods.go:74] duration metric: took 169.970426ms to wait for pod list to return data ...
	I0816 22:26:45.571085  238595 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:45.768620  238595 default_sa.go:45] found service account: "default"
	I0816 22:26:45.768644  238595 default_sa.go:55] duration metric: took 197.553773ms for default service account to be created ...
	I0816 22:26:45.768653  238595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:26:45.970940  238595 system_pods.go:86] 9 kube-system pods found
	I0816 22:26:45.970973  238595 system_pods.go:89] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.970982  238595 system_pods.go:89] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.970987  238595 system_pods.go:89] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.970993  238595 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.971000  238595 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.971006  238595 system_pods.go:89] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.971013  238595 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.971024  238595 system_pods.go:89] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.971037  238595 system_pods.go:89] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Running
	I0816 22:26:45.971046  238595 system_pods.go:126] duration metric: took 202.387682ms to wait for k8s-apps to be running ...
	I0816 22:26:45.971061  238595 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:26:45.971104  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:46.023089  238595 system_svc.go:56] duration metric: took 52.020591ms WaitForService to wait for kubelet.
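The kubelet wait above relies on `systemctl is-active --quiet` reporting purely through its exit code: 0 means active, anything else means not. A hedged one-liner sketch; the arguments are copied verbatim from the Run: lines above:

```go
// Hedged sketch: the exit code of systemctl is-active is the whole answer.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet",
		"service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```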
	I0816 22:26:46.023116  238595 kubeadm.go:547] duration metric: took 6.587748491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:26:46.023141  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:46.168888  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:46.168915  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:46.168933  238595 node_conditions.go:105] duration metric: took 145.786239ms to run NodePressure ...
	I0816 22:26:46.168945  238595 start.go:231] waiting for startup goroutines ...
	I0816 22:26:46.211558  238595 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:26:46.214728  238595 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816221939-6487" cluster and "default" namespace by default
	I0816 22:26:45.495975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:47.496653  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:49.995957  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:52.496048  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:54.204913  240293 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.057884699s)
	I0816 22:26:54.204974  240293 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:54.214048  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:54.214110  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:54.236967  240293 cri.go:76] found id: ""
	I0816 22:26:54.237019  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:54.243553  240293 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:54.243606  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:54.249971  240293 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
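The "config check failed" above is intentional: before cleaning up stale control-plane config, minikube stats the four kubeconfig files, and if any is missing (exit status 2 from ls, as here) there is nothing stale to clean, so it proceeds straight to `kubeadm init`. A minimal Go sketch of the same existence check; the file list is verbatim from the log:

```go
// Hedged sketch: stat the control-plane config files; any miss means no
// stale config cleanup is needed.
package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("%s: %v\n", f, err)
			stale = false
		}
	}
	fmt.Println("stale config present:", stale)
}
```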
	I0816 22:26:54.250416  240293 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:54.516364  240293 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:55.249703  240293 out.go:204]   - Booting up control plane ...
	I0816 22:26:54.996103  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:57.495660  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:59.495743  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:01.995335  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:03.995379  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:05.995637  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:08.496092  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:09.298595  240293 out.go:204]   - Configuring RBAC rules ...
	I0816 22:27:09.713304  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:27:09.713327  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:27:09.715227  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:27:09.715277  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:27:09.718863  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:27:09.718885  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:27:09.731677  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:27:09.962283  240293 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:27:09.962350  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:09.962373  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816221913-6487 minikube.k8s.io/updated_at=2021_08_16T22_27_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.060642  240293 ops.go:34] apiserver oom_adj: -16
	I0816 22:27:10.060723  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.995882  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:13.495974  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:10.633246  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.133139  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:11.633557  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.133518  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:12.633029  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.132949  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:13.632656  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.133534  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:14.632964  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.133130  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.496295  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:17.995970  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:15.632812  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:16.132692  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:16.633691  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:17.133141  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:17.632912  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:18.132865  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:18.633533  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:19.132892  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:19.632997  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:20.133122  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:20.496121  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:22.995237  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:20.633092  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:21.132697  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:21.632742  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:22.133291  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:22.632839  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:23.133425  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:23.632752  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:24.132877  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:24.198432  240293 kubeadm.go:985] duration metric: took 14.236137948s to wait for elevateKubeSystemPrivileges.
	I0816 22:27:24.198462  240293 kubeadm.go:392] StartCluster complete in 6m0.995598802s
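
	The burst of identical "kubectl get sa default" runs above is minikube polling until the cluster's default ServiceAccount exists; that is the elevateKubeSystemPrivileges wait whose 14.2s duration is reported here. A minimal client-go sketch of an equivalent poll (not minikube's actual code; the 500ms interval and 5m timeout are illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the logged kubectl invocations point at.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll until the "default" ServiceAccount can be fetched.
		err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
			_, getErr := client.CoreV1().ServiceAccounts("default").
				Get(context.TODO(), "default", metav1.GetOptions{})
			return getErr == nil, nil // not there yet: keep retrying
		})
		fmt.Println("default service account present:", err == nil)
	}
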
	I0816 22:27:24.198481  240293 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.198572  240293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:27:24.200345  240293 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.715145  240293 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816221913-6487" rescaled to 1
	I0816 22:27:24.715193  240293 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:27:24.717805  240293 out.go:177] * Verifying Kubernetes components...
	I0816 22:27:24.717866  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:24.715250  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:27:24.715269  240293 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:27:24.717969  240293 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.717988  240293 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.717999  240293 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:27:24.718001  240293 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718022  240293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816221913-6487"
	I0816 22:27:24.718032  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718039  240293 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718052  240293 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:24.717986  240293 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718085  240293 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.718100  240293 addons.go:147] addon dashboard should already be in state true
	I0816 22:27:24.718131  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718343  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.715429  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:27:24.718059  240293 addons.go:147] addon metrics-server should already be in state true
	I0816 22:27:24.718417  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718547  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718594  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718818  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.782293  240293 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:27:24.783873  240293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:27:24.782196  240293 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.783987  240293 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:27:24.784020  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.784033  240293 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:24.784044  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:27:24.785627  240293 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.785699  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:27:24.785710  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:27:24.784098  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.785767  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.784669  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.787448  240293 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.787521  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:27:24.787537  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:27:24.787582  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.844134  240293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.844870  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
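
	The pipeline above rewrites the coredns ConfigMap in place: sed splices a hosts stanza immediately ahead of the Corefile's "forward . /etc/resolv.conf" line, so host.minikube.internal resolves to the host side of this cluster's network (192.168.76.1) while every other name still falls through to /etc/resolv.conf. After the replace, that part of the Corefile reads:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
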
	I0816 22:27:24.854809  240293 node_ready.go:49] node "embed-certs-20210816221913-6487" has status "Ready":"True"
	I0816 22:27:24.854830  240293 node_ready.go:38] duration metric: took 10.664038ms waiting for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.854841  240293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0816 22:27:24.855545  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.861143  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:24.861336  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.863265  240293 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:24.863285  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:27:24.863344  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.865862  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.902450  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:25.213259  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:27:25.213287  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:27:25.213568  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:25.233517  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:25.239365  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:27:25.239389  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:27:25.313683  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:27:25.313712  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:27:25.433541  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:27:25.433568  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:27:25.434948  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:27:25.434968  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:27:25.527034  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:27:25.527059  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:27:25.613745  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.613777  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:27:25.625813  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:27:25.625851  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:27:25.713538  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.726637  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:27:25.726666  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:27:25.734858  240293 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0816 22:27:25.820941  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:27:25.820971  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:27:25.840244  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:27:25.840270  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:27:25.925179  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:25.925202  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:27:26.021980  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:26.324641  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111035481s)
	I0816 22:27:26.324667  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091124904s)
	I0816 22:27:26.939142  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.022283  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.30869814s)
	I0816 22:27:27.022370  240293 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:27.431601  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.409553263s)
	I0816 22:27:24.996042  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.495421  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.433693  240293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:27:27.433723  240293 addons.go:344] enableAddons completed in 2.718461512s
	I0816 22:27:29.427787  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:29.496073  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.995232  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.927352  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:32.427442  240293 pod_ready.go:92] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:32.427460  240293 pod_ready.go:81] duration metric: took 7.566292628s waiting for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:32.427472  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.437803  240293 pod_ready.go:102] pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:34.934910  240293 pod_ready.go:97] error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
	I0816 22:27:34.934937  240293 pod_ready.go:81] duration metric: took 2.507455875s waiting for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	E0816 22:27:34.934947  240293 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
	I0816 22:27:34.934954  240293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938786  240293 pod_ready.go:92] pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.938802  240293 pod_ready.go:81] duration metric: took 3.83976ms waiting for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938813  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945030  240293 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.945045  240293 pod_ready.go:81] duration metric: took 6.225501ms waiting for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945054  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948474  240293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.948489  240293 pod_ready.go:81] duration metric: took 3.428771ms waiting for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948497  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951783  240293 pod_ready.go:92] pod "kube-proxy-hdhfc" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.951796  240293 pod_ready.go:81] duration metric: took 3.294223ms waiting for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951803  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136382  240293 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:35.136401  240293 pod_ready.go:81] duration metric: took 184.590897ms waiting for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136410  240293 pod_ready.go:38] duration metric: took 10.281557269s of extra waiting for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
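
	Each pod_ready wait in this log boils down to fetching the pod and inspecting its PodReady condition. A hedged client-go sketch of that check (an illustrative helper, not the test suite's own code):

	package podready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// IsPodReady reports whether the named pod's PodReady condition is True,
	// mirroring the "Ready":"True"/"False" status lines logged above.
	func IsPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
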
	I0816 22:27:35.136426  240293 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:27:35.136458  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:27:35.159861  240293 api_server.go:70] duration metric: took 10.444645521s to wait for apiserver process to appear ...
	I0816 22:27:35.159888  240293 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:27:35.159899  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:27:35.164341  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:27:35.165220  240293 api_server.go:139] control plane version: v1.21.3
	I0816 22:27:35.165240  240293 api_server.go:129] duration metric: took 5.346619ms to wait for apiserver health ...
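
	The healthz probe above is a plain HTTPS GET against the apiserver. A self-contained Go sketch of the same check (192.168.76.2:8443 comes from this run; InsecureSkipVerify stands in for loading the cluster CA, which the real client does):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip CA verification only because this is a sketch; the real
		// check trusts the cluster's CA certificate instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
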
	I0816 22:27:35.165249  240293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:27:35.339424  240293 system_pods.go:59] 9 kube-system pods found
	I0816 22:27:35.339458  240293 system_pods.go:61] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.339466  240293 system_pods.go:61] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.339472  240293 system_pods.go:61] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.339478  240293 system_pods.go:61] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.339485  240293 system_pods.go:61] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.339492  240293 system_pods.go:61] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.339497  240293 system_pods.go:61] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.339509  240293 system_pods.go:61] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.339527  240293 system_pods.go:61] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.339535  240293 system_pods.go:74] duration metric: took 174.279391ms to wait for pod list to return data ...
	I0816 22:27:35.339548  240293 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:27:35.536578  240293 default_sa.go:45] found service account: "default"
	I0816 22:27:35.536602  240293 default_sa.go:55] duration metric: took 197.045764ms for default service account to be created ...
	I0816 22:27:35.536610  240293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:27:35.738632  240293 system_pods.go:86] 9 kube-system pods found
	I0816 22:27:35.738661  240293 system_pods.go:89] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.738666  240293 system_pods.go:89] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.738671  240293 system_pods.go:89] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.738675  240293 system_pods.go:89] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.738681  240293 system_pods.go:89] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.738685  240293 system_pods.go:89] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.738689  240293 system_pods.go:89] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.738695  240293 system_pods.go:89] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.738700  240293 system_pods.go:89] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.738707  240293 system_pods.go:126] duration metric: took 202.09278ms to wait for k8s-apps to be running ...
	I0816 22:27:35.738724  240293 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:27:35.738761  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:35.748257  240293 system_svc.go:56] duration metric: took 9.52848ms (WaitForService) to wait for kubelet.
	I0816 22:27:35.748278  240293 kubeadm.go:547] duration metric: took 11.033066699s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:27:35.748301  240293 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:27:35.936039  240293 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:27:35.936064  240293 node_conditions.go:123] node cpu capacity is 8
	I0816 22:27:35.936078  240293 node_conditions.go:105] duration metric: took 187.771781ms to run NodePressure ...
	I0816 22:27:35.936087  240293 start.go:231] waiting for startup goroutines ...
	I0816 22:27:35.979326  240293 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
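
	kubectl 1.20.5 against a v1.21.3 control plane is a one-minor-version skew, which kubectl supports (one minor version in either direction), so minikube records it as information rather than a warning.
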
	I0816 22:27:35.981602  240293 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816221913-6487" cluster and "default" namespace by default
	I0816 22:27:34.495967  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:36.995351  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:38.995682  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:41.495818  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:43.496112  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:45.995716  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:47.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:21:17 UTC, end at Mon 2021-08-16 22:27:49 UTC. --
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.570944043Z" level=info msg="Created container 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-587mw/kubernetes-dashboard" id=fce43c7d-36b3-4916-b344-1a4458be3b2f name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.571447186Z" level=info msg="Starting container: 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d" id=bc4d6c50-809a-450e-9a75-e00a5825de56 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.581129812Z" level=info msg="Started container 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-587mw/kubernetes-dashboard" id=bc4d6c50-809a-450e-9a75-e00a5825de56 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.593389521Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.431278204Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=083bf632-adc2-4441-9c97-3a905ce720d9 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.432153626Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=43a65ed3-d63d-45b0-ba7c-fd264e2a88e0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.433499458Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=43a65ed3-d63d-45b0-ba7c-fd264e2a88e0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.434336523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=0c07bbc6-68d6-4464-b6c3-dc71c8d5d1c5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.612486634Z" level=info msg="Created container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=0c07bbc6-68d6-4464-b6c3-dc71c8d5d1c5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.613000475Z" level=info msg="Starting container: 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97" id=8f271b29-bac0-4039-9ff8-baec7e77f8f1 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.636655040Z" level=info msg="Started container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=8f271b29-bac0-4039-9ff8-baec7e77f8f1 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.017783534Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=a64dcacc-ee66-48d0-9160-efb6ee96637d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.019296627Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a64dcacc-ee66-48d0-9160-efb6ee96637d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.019935606Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=d3724e68-1a70-4708-bf20-d8860457212d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.021647989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d3724e68-1a70-4708-bf20-d8860457212d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.022458450Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=620b16c7-b8c5-4bd1-bc7d-febb0e7c5c66 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.180481781Z" level=info msg="Created container 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=620b16c7-b8c5-4bd1-bc7d-febb0e7c5c66 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.181048769Z" level=info msg="Starting container: 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf" id=bd181b1f-e40b-4f9a-8215-db250357d236 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.205931119Z" level=info msg="Started container 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=bd181b1f-e40b-4f9a-8215-db250357d236 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:35 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:35.021451543Z" level=info msg="Removing container: 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97" id=e3476846-381d-4416-a663-e4ff0479657e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:35 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:35.056285144Z" level=info msg="Removed container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=e3476846-381d-4416-a663-e4ff0479657e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.908749876Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=ee12d614-becf-44f2-be7c-0aead71003c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.909043324Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=ee12d614-becf-44f2-be7c-0aead71003c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.909418865Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=f3218cd1-953a-4ff5-b0f5-c6009329d052 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.927861801Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
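
	These last pulls target fake.domain/k8s.gcr.io/echoserver:1.4, the apparently intentional, unreachable registry the addon setup logged earlier ("Using image fake.domain/k8s.gcr.io/echoserver:1.4"); the pull can never succeed, which is consistent with metrics-server-7c784ccb57-jlfzn sitting in Pending in the pod listings above.
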
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	9e1ff151b67d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   15 seconds ago      Exited              dashboard-metrics-scraper   1                   a672a2744ad07
	47dc2b7452e70       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   20 seconds ago      Running             kubernetes-dashboard        0                   21bccc03a7df9
	debf85165af74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Running             storage-provisioner         0                   3c01981e9f768
	0a0ea4978ab6c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   23 seconds ago      Running             coredns                     0                   7975dbc93e3d2
	f33617d33e584       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   23 seconds ago      Running             kindnet-cni                 0                   1796b1138290b
	0aa147fb51000       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   24 seconds ago      Running             kube-proxy                  0                   a8f8e3fd73e89
	8bf1c59231af0       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   47 seconds ago      Running             etcd                        0                   90aaa0ae8b4e7
	aa576929be7b6       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   47 seconds ago      Running             kube-controller-manager     0                   babc83d4e2713
	ec5f895255549       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   47 seconds ago      Running             kube-scheduler              0                   1e423b06a707c
	34a8effb725d1       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   47 seconds ago      Running             kube-apiserver              0                   10cce228f146c
	
	* 
	* ==> coredns [0a0ea4978ab6cd1089b9d06f0a278a1b0505d7d08360f002b22ad418383c54c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.895921] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.832077] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.335384] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:27] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[ +13.663740] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth55ef9b3c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 37 a8 8c 4d 9e 08 06        .......7..M...
	[  +2.163880] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb864e10f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 f5 50 a9 1a cc 08 06        ......&.P.....
	[  +0.707561] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth9c8775f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 1b 78 c1 d0 58 08 06        ......J.x..X..
	[  +0.000675] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6f717d76
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa 3a da 18 32 b9 08 06        .......:..2...
	[ +12.646052] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	
	* 
	* ==> etcd [8bf1c59231af08989a5db07139984cfdf2c0c9cf6fdc1d6e5bbf3f4a03bc5362] <==
	* raft2021/08/16 22:27:02 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-16 22:27:02.329057 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-16 22:27:02.329658 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:27:02.329740 I | embed: listening for peers on 192.168.76.2:2380
	2021-08-16 22:27:02.329828 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 became candidate at term 2
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 became leader at term 2
	raft2021/08/16 22:27:03 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2021-08-16 22:27:03.020939 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:27:03.021964 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:27:03.022035 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:27:03.022061 I | embed: ready to serve client requests
	2021-08-16 22:27:03.022078 I | etcdserver: published {Name:embed-certs-20210816221913-6487 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-16 22:27:03.022088 I | embed: ready to serve client requests
	2021-08-16 22:27:03.023447 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:27:03.023597 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-16 22:27:16.361700 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (165.201581ms) to execute
	2021-08-16 22:27:16.361785 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-embed-certs-20210816221913-6487\" " with result "range_response_count:1 size:4006" took too long (194.651986ms) to execute
	2021-08-16 22:27:21.294639 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:0 size:5" took too long (114.403073ms) to execute
	2021-08-16 22:27:23.128156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:26.372537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:36.372225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:46.371531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
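
	The "took too long" warnings above are etcd flagging read-only range requests that crossed its internal slow-request threshold; with the node under load (see the load average in the kernel section below), occasional 100-200ms reads are expected CI noise rather than a failure.
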
	
	* 
	* ==> kernel <==
	*  22:27:59 up  1:07,  0 users,  load average: 1.74, 2.41, 2.26
	Linux embed-certs-20210816221913-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [34a8effb725d15ebb6c34b6f90d57dbc544e5ec2f8403d07ec1e1f6196fc373a] <==
	* I0816 22:27:06.713712       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:27:06.713814       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:27:07.531605       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:27:07.531627       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:27:07.536620       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0816 22:27:07.540238       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0816 22:27:07.540253       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:27:07.877048       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:27:07.929274       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0816 22:27:08.044065       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0816 22:27:08.045019       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:27:08.049101       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:27:09.115449       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:27:09.478093       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:27:09.535096       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:27:14.878779       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:27:23.534726       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:27:23.884521       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0816 22:27:29.020665       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 22:27:29.020766       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:27:29.020781       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:27:41.898139       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:27:41.898177       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:27:41.898185       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [aa576929be7b6ab6d22ef3cb64aa2fa59c7f5ca84a125be69a9a4b61bf1a0ef7] <==
	* I0816 22:27:23.942787       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4zdn7"
	I0816 22:27:24.216892       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 22:27:24.226769       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-tc25b"
	I0816 22:27:26.444068       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0816 22:27:26.522098       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:27:26.536428       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:27:26.815020       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-jlfzn"
	I0816 22:27:27.018653       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:27:27.030965       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0816 22:27:27.033872       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.039891       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.041152       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.114622       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.115024       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.118457       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:27:27.125230       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.125282       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:27:27.125548       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.125552       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:27:27.129782       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.129836       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.133138       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.133208       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:27:27.214283       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-587mw"
	I0816 22:27:27.217209       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-cghgx"
	
	* 
	* ==> kube-proxy [0aa147fb5100088ef2b57609d0f2fcf92c5d3be6e41da177a1be6f4523451318] <==
	* I0816 22:27:25.015674       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0816 22:27:25.015729       1 server_others.go:140] Detected node IP 192.168.76.2
	W0816 22:27:25.015775       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:27:25.046502       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:27:25.046537       1 server_others.go:212] Using iptables Proxier.
	I0816 22:27:25.046547       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:27:25.046558       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6
	I0816 22:27:25.046841       1 server.go:643] Version: v1.21.3
	I0816 22:27:25.047971       1 config.go:315] Starting service config controller
	I0816 22:27:25.047998       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:27:25.048526       1 config.go:224] Starting endpoint slice config controller
	I0816 22:27:25.048540       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:27:25.113069       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:27:25.120869       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:27:25.212551       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:27:25.212599       1 shared_informer.go:247] Caches are synced for service config 
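
The two warnings.go lines above are deprecation notices only: kube-proxy on v1.21 still watches discovery.k8s.io/v1beta1 EndpointSlice, which remains served until its removal in v1.25. A quick check of which discovery API resources this server offers (a sketch, assuming the profile's kubectl context):

	kubectl --context embed-certs-20210816221913-6487 api-resources --api-group=discovery.k8s.io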
	
	* 
	* ==> kube-scheduler [ec5f895255549ca13063d25ca771f194444cccce85d22ec196e923f9c0520e16] <==
	* W0816 22:27:06.554631       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:27:06.554670       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:27:06.554690       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:27:06.554697       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:27:06.634790       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:27:06.635953       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:27:06.635978       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:27:06.636001       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:27:06.640597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:27:06.640768       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:27:06.640886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.640960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.641026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.644116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:27:06.644186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:27:06.644237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:27:06.644277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:27:06.644322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:27:06.644357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.644393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:27:06.644437       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:27:06.714300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:27:07.648700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:27:07.668902       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:27:09.336894       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
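
The "forbidden" and configmap-lookup warnings above are transient: they happen while the scheduler starts before the API server's RBAC bootstrap is complete, and the last line shows the client-ca cache syncing at 22:27:09. If they persisted, the hint printed in the log could be applied with a user subject rather than a service account; the binding name below is a placeholder, not something this test creates:

	kubectl create rolebinding -n kube-system scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler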
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:21:17 UTC, end at Mon 2021-08-16 22:27:59 UTC. --
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:27.332664    5721 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9cda492c-3ff6-4ef4-88a8-903a49b615b3-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-cghgx\" (UID: \"9cda492c-3ff6-4ef4-88a8-903a49b615b3\") "
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831191    5721 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831245    5721 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831383    5721 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dt8hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jlfzn_kube-system(9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831435    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.945791    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:27.945988    5721 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:27:34 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:34.017301    5721 scope.go:111] "RemoveContainer" containerID="45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:35.020455    5721 scope.go:111] "RemoveContainer" containerID="45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:35.020596    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:35.020942    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:35.256190    5721 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:27:36 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:36.023821    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:36 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:36.024284    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:37 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:37.233453    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:37 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:37.233743    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932349    5721 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932387    5721 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932498    5721 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dt8hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jlfzn_kube-system(9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932530    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:45 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:45.355097    5721 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:27:47 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:47.080119    5721 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
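
The ErrImagePull/ImagePullBackOff noise for metrics-server is expected in this run: the Audit table below shows the addon was enabled with --registries=MetricsServer=fake.domain, so the pull is meant to fail. It can be reproduced inside the node in the same style the test uses for crictl (a sketch):

	out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 ssh "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"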
	
	* 
	* ==> kubernetes-dashboard [47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d] <==
	* 2021/08/16 22:27:28 Starting overwatch
	2021/08/16 22:27:28 Using namespace: kubernetes-dashboard
	2021/08/16 22:27:28 Using in-cluster config to connect to apiserver
	2021/08/16 22:27:28 Using secret token for csrf signing
	2021/08/16 22:27:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:27:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:27:28 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:27:28 Generating JWE encryption key
	2021/08/16 22:27:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:27:28 Initializing JWE encryption key from synchronized object
	2021/08/16 22:27:28 Creating in-cluster Sidecar client
	2021/08/16 22:27:28 Serving insecurely on HTTP port: 9090
	2021/08/16 22:27:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
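
The metric client health check fails because dashboard-metrics-scraper was still crash-looping at this point (see the CrashLoopBackOff entries in the kubelet log above); the dashboard retries every 30 seconds. A minimal status check, assuming the profile's kubectl context:

	kubectl --context embed-certs-20210816221913-6487 -n kubernetes-dashboard get pods,svc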
	
	* 
	* ==> storage-provisioner [debf85165af7410de1807f28a939ac729e5268c28dd7659f8c7023882c3ca649] <==
	* I0816 22:27:27.734809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:27:27.743272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:27:27.743318       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:27:27.818233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:27:27.818442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816221913-6487_fa1149b0-842e-4d90-a8a3-d7a4493c46c3!
	I0816 22:27:27.819661       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e0300ff-9302-4840-9a3b-45f6c59cb614", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210816221913-6487_fa1149b0-842e-4d90-a8a3-d7a4493c46c3 became leader
	I0816 22:27:27.919477       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816221913-6487_fa1149b0-842e-4d90-a8a3-d7a4493c46c3!
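
storage-provisioner takes its lock via endpoints-based leader election on kube-system/k8s.io-minikube-hostpath, which is why the LeaderElection event above references an Endpoints object. The current holder can be read from that object's annotations (a sketch, assuming the profile's kubectl context):

	kubectl --context embed-certs-20210816221913-6487 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'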
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:27:59.596556  276698 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
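
The TLS handshake timeout is consistent with this being the Pause subtest: with the control plane paused, the apiserver cannot answer, so the describe-nodes step of minikube logs times out and the logs command exits 110. When reproducing locally, unpausing first should let the same commands succeed (a sketch):

	out/minikube-linux-amd64 unpause -p embed-certs-20210816221913-6487
	kubectl --context embed-certs-20210816221913-6487 describe nodes
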
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210816221913-6487
helpers_test.go:236: (dbg) docker inspect embed-certs-20210816221913-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f",
	        "Created": "2021-08-16T22:19:14.835612448Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:21:17.267754297Z",
	            "FinishedAt": "2021-08-16T22:21:14.948914507Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/hosts",
	        "LogPath": "/var/lib/docker/containers/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f-json.log",
	        "Name": "/embed-certs-20210816221913-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210816221913-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210816221913-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf481c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/docker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f22434b154c126cc38d201a479347972bc497d40dabe8fdb45932c210d3a268/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210816221913-6487",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210816221913-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210816221913-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210816221913-6487",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210816221913-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "66c932cda8a09008d47d9a2d61331c28459055d0ba616daa818b44be955c6ed2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/66c932cda8a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210816221913-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4e30df1bcd77"
	                    ],
	                    "NetworkID": "c3bd6b7609c0d09834ebe1c44b095ba7758b47f6dd42c7201a8fb39db16dfef9",
	                    "EndpointID": "1d8c47e872253088d5b9fa469cb566174b22dabfe6132ec5b8774d84dcb15b24",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
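
In the inspect output above, the top-level NetworkSettings.IPAddress is empty because the node container is attached to the user-defined embed-certs-20210816221913-6487 network; its address lives under Networks. The value can be pulled directly with a Go template (a sketch):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-20210816221913-6487
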
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487: exit status 2 (377.140763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 logs -n 25
E0816 22:28:02.070760    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210816221913-6487 logs -n 25: exit status 110 (10.861285183s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | disable-driver-mounts-20210816221938-6487      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:38 UTC | Mon, 16 Aug 2021 22:19:39 UTC |
	|         | disable-driver-mounts-20210816221938-6487                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:39 UTC | Mon, 16 Aug 2021 22:20:32 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:20:41 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:13 UTC | Mon, 16 Aug 2021 22:20:45 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:53 UTC | Mon, 16 Aug 2021 22:20:54 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:41 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:21:02 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:54 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:27:35 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:27:46 UTC | Mon, 16 Aug 2021 22:27:46 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
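
For reference, the "ssh ... sudo crictl images -o json" rows in the audit table above can be reproduced against any running profile; a minimal sketch, using the profile name from this run as a stand-in:

	out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 ssh "sudo crictl images -o json"
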
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:46
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:46.856773  262957 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:46.856848  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856858  262957 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:46.856861  262957 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:46.856963  262957 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:46.857212  262957 out.go:305] Setting JSON to false
	I0816 22:25:46.893957  262957 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3914,"bootTime":1629148833,"procs":365,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:46.894067  262957 start.go:121] virtualization: kvm guest
	I0816 22:25:46.896379  262957 out.go:177] * [newest-cni-20210816222436-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:46.897973  262957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:46.896522  262957 notify.go:169] Checking for updates...
	I0816 22:25:46.899468  262957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:46.900988  262957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:46.902492  262957 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:46.902900  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:46.903274  262957 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:46.950656  262957 docker.go:132] docker version: linux-19.03.15
	I0816 22:25:46.950732  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.034524  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:46.986320519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:25:47.034654  262957 docker.go:244] overlay module found
	I0816 22:25:47.037282  262957 out.go:177] * Using the docker driver based on existing profile
	I0816 22:25:47.037307  262957 start.go:278] selected driver: docker
	I0816 22:25:47.037313  262957 start.go:751] validating driver "docker" against &{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.037417  262957 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:25:47.037459  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.037480  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.039083  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.040150  262957 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:25:47.119162  262957 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:25:47.075605257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0816 22:25:47.119274  262957 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:25:47.119298  262957 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:25:47.121212  262957 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:25:47.121330  262957 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:25:47.121355  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:47.121364  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:47.121376  262957 start_flags.go:277] config:
	{Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:47.123081  262957 out.go:177] * Starting control plane node newest-cni-20210816222436-6487 in cluster newest-cni-20210816222436-6487
	I0816 22:25:47.123113  262957 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:25:47.124788  262957 out.go:177] * Pulling base image ...
	I0816 22:25:47.124814  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:47.124838  262957 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 22:25:47.124853  262957 cache.go:56] Caching tarball of preloaded images
	I0816 22:25:47.124910  262957 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:25:47.125039  262957 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:25:47.125058  262957 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0816 22:25:47.125170  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:47.212531  262957 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:25:47.212557  262957 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:25:47.212577  262957 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:47.212610  262957 start.go:313] acquiring machines lock for newest-cni-20210816222436-6487: {Name:mkd90dd1df90e2f23e61f524a3ae6e1a65dd1b39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:47.212710  262957 start.go:317] acquired machines lock for "newest-cni-20210816222436-6487" in 80.626µs
	I0816 22:25:47.212739  262957 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:47.212748  262957 fix.go:55] fixHost starting: 
	I0816 22:25:47.212988  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:47.251771  262957 fix.go:108] recreateIfNeeded on newest-cni-20210816222436-6487: state=Stopped err=<nil>
	W0816 22:25:47.251798  262957 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:44.995113  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:46.995369  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.650229  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:49.650872  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:47.254057  262957 out.go:177] * Restarting existing docker container for "newest-cni-20210816222436-6487" ...
	I0816 22:25:47.254120  262957 cli_runner.go:115] Run: docker start newest-cni-20210816222436-6487
	I0816 22:25:48.586029  262957 cli_runner.go:168] Completed: docker start newest-cni-20210816222436-6487: (1.33187871s)
	I0816 22:25:48.586111  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:25:48.626755  262957 kic.go:420] container "newest-cni-20210816222436-6487" state is running.
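
The restart sequence above boils down to two plain docker invocations; a minimal sketch of the same check, assuming the container name from this run:

	docker start newest-cni-20210816222436-6487
	docker container inspect newest-cni-20210816222436-6487 --format '{{.State.Status}}'   # expect "running"
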
	I0816 22:25:48.627256  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:48.670009  262957 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/config.json ...
	I0816 22:25:48.670233  262957 machine.go:88] provisioning docker machine ...
	I0816 22:25:48.670255  262957 ubuntu.go:169] provisioning hostname "newest-cni-20210816222436-6487"
	I0816 22:25:48.670309  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:48.711043  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:48.711197  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:48.711217  262957 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816222436-6487 && echo "newest-cni-20210816222436-6487" | sudo tee /etc/hostname
	I0816 22:25:48.711815  262957 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48490->127.0.0.1:32969: read: connection reset by peer
	I0816 22:25:49.495358  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.496029  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.497145  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:51.907195  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816222436-6487
	
	I0816 22:25:51.907262  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:51.946396  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.946596  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:51.946627  262957 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816222436-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816222436-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816222436-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:52.071168  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:52.071196  262957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:52.071223  262957 ubuntu.go:177] setting up certificates
	I0816 22:25:52.071234  262957 provision.go:83] configureAuth start
	I0816 22:25:52.071275  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:52.110550  262957 provision.go:138] copyHostCerts
	I0816 22:25:52.110621  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:52.110633  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:52.110696  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:52.110798  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:52.110811  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:52.110835  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:52.110969  262957 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:52.110981  262957 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:52.111006  262957 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:25:52.111059  262957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816222436-6487 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210816222436-6487]
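
The SAN list minikube bakes into server.pem can be confirmed with openssl; a sketch, assuming MINIKUBE_HOME points at the .minikube directory logged above:

	openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" | grep -A1 'Subject Alternative Name'
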
	I0816 22:25:52.355600  262957 provision.go:172] copyRemoteCerts
	I0816 22:25:52.355664  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:52.355720  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.396113  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:52.486667  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:52.503265  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:25:52.518138  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:25:52.533106  262957 provision.go:86] duration metric: configureAuth took 461.862959ms
	I0816 22:25:52.533124  262957 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:25:52.533292  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:52.533391  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:52.573329  262957 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:52.573496  262957 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0816 22:25:52.573517  262957 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:25:52.991954  262957 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:25:52.991986  262957 machine.go:91] provisioned docker machine in 4.321739549s
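
To read back what was just provisioned, the CRI-O drop-in and service state can be checked over the same SSH path; a sketch:

	out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
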
	I0816 22:25:52.991996  262957 start.go:267] post-start starting for "newest-cni-20210816222436-6487" (driver="docker")
	I0816 22:25:52.992007  262957 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:52.992069  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:52.992113  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.032158  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.123013  262957 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:53.125495  262957 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:25:53.125515  262957 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:25:53.125523  262957 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:25:53.125528  262957 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:25:53.125536  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:53.125574  262957 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:53.125646  262957 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:25:53.125746  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:53.131911  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:53.149155  262957 start.go:270] post-start completed in 157.141514ms
	I0816 22:25:53.149220  262957 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:25:53.149270  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.190433  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.275867  262957 fix.go:57] fixHost completed within 6.063112205s
	I0816 22:25:53.275893  262957 start.go:80] releasing machines lock for "newest-cni-20210816222436-6487", held for 6.063163627s
	I0816 22:25:53.275995  262957 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210816222436-6487
	I0816 22:25:53.317483  262957 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:53.317538  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.317562  262957 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:53.317640  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:25:53.361517  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.362854  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:25:53.489151  262957 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:25:53.499402  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:25:53.507663  262957 docker.go:153] disabling docker service ...
	I0816 22:25:53.507710  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:53.515840  262957 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:53.523795  262957 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:53.582896  262957 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:53.644285  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:53.653611  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:53.665218  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.672674  262957 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0816 22:25:53.672699  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0816 22:25:53.680934  262957 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:53.686723  262957 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:53.686773  262957 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:53.693222  262957 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
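
The sysctl failure above is expected while br_netfilter is not loaded; the recovery minikube performs is equivalent to:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
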
	I0816 22:25:53.698990  262957 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:53.756392  262957 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:25:53.765202  262957 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:25:53.765252  262957 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:25:53.768154  262957 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.768197  262957 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.794195  262957 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:25:53.794262  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.852537  262957 ssh_runner.go:149] Run: crio --version
	I0816 22:25:53.912084  262957 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0816 22:25:53.912160  262957 cli_runner.go:115] Run: docker network inspect newest-cni-20210816222436-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:25:53.950048  262957 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.953262  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
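
Note the hosts-file rewrite goes through a temp file plus cp rather than sed -i, likely because /etc/hosts is a bind mount inside the node container and cannot be replaced by rename; the idempotent pattern, spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
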
	I0816 22:25:53.963781  262957 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:25:52.154162  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:54.650414  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:53.965324  262957 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:25:53.965406  262957 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 22:25:53.965459  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.993612  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:53.993630  262957 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:25:53.993667  262957 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:54.020097  262957 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:25:54.020118  262957 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:54.020180  262957 ssh_runner.go:149] Run: crio config
	I0816 22:25:54.082979  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:25:54.083003  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:25:54.083013  262957 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:25:54.083024  262957 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816222436-6487 NodeName:newest-cni-20210816222436-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:54.083168  262957 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210816222436-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:25:54.083284  262957 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816222436-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:54.083346  262957 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:54.090012  262957 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:54.090068  262957 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:54.096369  262957 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (603 bytes)
	I0816 22:25:54.107861  262957 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:54.119303  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
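
All three artifacts written above can be read back from the node to verify what kubelet and kubeadm will actually consume; a sketch:

	out/minikube-linux-amd64 -p newest-cni-20210816222436-6487 ssh "systemctl cat kubelet; sudo cat /var/tmp/minikube/kubeadm.yaml.new"
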
	I0816 22:25:54.130633  262957 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:54.133217  262957 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:54.141396  262957 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487 for IP: 192.168.67.2
	I0816 22:25:54.141447  262957 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:54.141471  262957 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:54.141535  262957 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/client.key
	I0816 22:25:54.141563  262957 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key.c7fa3a9e
	I0816 22:25:54.141596  262957 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key
	I0816 22:25:54.141717  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:25:54.141762  262957 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:54.141774  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:25:54.141803  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:54.141827  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:54.141848  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:25:54.141897  262957 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:25:54.142744  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:54.158540  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:25:54.174181  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:54.190076  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816222436-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:25:54.205410  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:54.220130  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:25:54.235298  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:54.251605  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:25:54.268123  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:25:54.283499  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:54.298583  262957 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:25:54.314024  262957 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:54.325021  262957 ssh_runner.go:149] Run: openssl version
	I0816 22:25:54.329401  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:25:54.335940  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338596  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.338638  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:25:54.342906  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:54.348826  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:54.358858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361641  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.361673  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:54.365977  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:54.372154  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:25:54.378858  262957 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381576  262957 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.381623  262957 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:25:54.386036  262957 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
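
The symlinks created above implement OpenSSL's hashed-directory lookup: each /etc/ssl/certs/<hash>.0 name is the subject hash of the certificate it points to, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
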
	I0816 22:25:54.391898  262957 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816222436-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816222436-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:54.392022  262957 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:54.392052  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:54.414245  262957 cri.go:76] found id: ""
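An empty ID list here means no kube-system containers exist yet in the CRI runtime. The same probe can be run standalone, assuming crictl on the node is configured to talk to the crio socket:

	# List all (including stopped) container IDs whose pod namespace is kube-system;
	# --quiet prints only IDs, so empty output means nothing matched.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system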
	I0816 22:25:54.414284  262957 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:54.420413  262957 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:54.420436  262957 kubeadm.go:600] restartCluster start
	I0816 22:25:54.420466  262957 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:54.426072  262957 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.426966  262957 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816222436-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:54.427382  262957 kubeconfig.go:128] "newest-cni-20210816222436-6487" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:54.428106  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:54.430425  262957 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:54.436260  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.436301  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.447743  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
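Each poll in the loop that follows is the same one-liner; pgrep exits with status 1 when nothing matches, which is what drives the retry. A sketch of the probe itself:

	# -f matches against the full command line, -x requires the whole line to
	# match the pattern, -n picks the newest matching PID; exit status 1 just
	# means "apiserver not running yet".
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'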
	I0816 22:25:54.648124  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.648202  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.661570  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:54.848823  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:54.848884  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:54.862082  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.048130  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.048196  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.061645  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.247861  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.247956  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.262026  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.448347  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.448414  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.461467  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.648695  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.648774  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.661684  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.847947  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:55.848042  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:55.862542  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.048736  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.048800  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.061836  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.248110  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.248200  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.261360  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.448639  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.448705  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.461500  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:56.648623  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.648703  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.662181  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:55.995370  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.495829  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.651402  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:58.651440  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:56.848603  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:56.848665  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:56.861212  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.048524  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.048591  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.061580  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.248828  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.248911  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.261828  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.448121  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.448188  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.461171  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.461189  262957 api_server.go:164] Checking apiserver status ...
	I0816 22:25:57.461225  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:25:57.512239  262957 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.512268  262957 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:25:57.512276  262957 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:57.512288  262957 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:57.512336  262957 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:57.536298  262957 cri.go:76] found id: ""
	I0816 22:25:57.536370  262957 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:57.545155  262957 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:57.551792  262957 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 16 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 16 22:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 16 22:24 /etc/kubernetes/scheduler.conf
	
	I0816 22:25:57.551856  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:25:57.558184  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:25:57.564274  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.570245  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.570290  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:25:57.576131  262957 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:25:57.582547  262957 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:57.582595  262957 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 22:25:57.588511  262957 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594494  262957 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:57.594510  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:57.636811  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.457317  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.574142  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:58.631732  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
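The restart path replays only the kubeadm init phases that regenerate on-disk state (certificates, kubeconfigs, kubelet config, static pod manifests, local etcd) rather than running a full init. A compressed sketch of the same five commands, with the binary path and version copied from this run:

	# Replay the init phases used by the cluster-restart path. $phase is left
	# unquoted on purpose so "certs all" splits into subcommand words.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done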
	I0816 22:25:58.680441  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:58.680500  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.195406  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.695787  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.195739  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.694833  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.194883  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.695030  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.496189  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.994975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:01.151777  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:03.650774  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:02.195405  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.695613  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.195523  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.695735  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.195172  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.695313  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.194844  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.228254  262957 api_server.go:70] duration metric: took 6.54781428s to wait for apiserver process to appear ...
	I0816 22:26:05.228278  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:05.228288  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:04.995640  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:06.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.521501  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:08.521534  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
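The 500 body enumerates every registered health check; the [-] entries are post-start hooks that have not finished yet (their failure reasons are withheld from unauthenticated callers). The endpoint can be probed directly from the node; this is an illustrative command, not taken from the log, and -k is assumed because the apiserver's certificate is not in the local trust store:

	# Ask the aggregate endpoint for the per-check breakdown.
	curl -k 'https://192.168.67.2:8443/healthz?verbose'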
	I0816 22:26:09.022198  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.028317  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.028345  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:09.521603  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:09.526189  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:26:09.526218  262957 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:26:10.021661  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.027811  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
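Between the first 500 and this 200, the failing post-start hooks completed one by one, which is visible by diffing the successive check lists (start-apiextensions-controllers, then scheduling/bootstrap-system-priority-classes, then rbac/bootstrap-roles). Individual checks are also addressable as sub-paths of /healthz, e.g. (illustrative):

	# Probe a single named check instead of the aggregate endpoint.
	curl -k 'https://192.168.67.2:8443/healthz/poststarthook/rbac/bootstrap-roles'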
	I0816 22:26:10.035180  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.035212  262957 api_server.go:129] duration metric: took 4.806927084s to wait for apiserver health ...
	I0816 22:26:10.035225  262957 cni.go:93] Creating CNI manager for ""
	I0816 22:26:10.035233  262957 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:09.964461  238595 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.898528802s)
	I0816 22:26:09.964528  238595 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:09.973997  238595 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:09.974062  238595 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:09.999861  238595 cri.go:76] found id: ""
	I0816 22:26:09.999951  238595 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:10.007018  238595 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:10.007067  238595 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:10.013415  238595 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:26:10.013459  238595 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
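Because this node is itself a Docker container, preflight checks that inspect the host (swap, memory, ports, pre-existing manifest files, system verification) are expected to fail and are skipped wholesale via --ignore-preflight-errors. A trimmed sketch with an illustrative subset of the flags listed above:

	# Full init, skipping host-level preflight checks that cannot pass inside a
	# container driver; the real invocation skips the longer list shown above.
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,SystemVerification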
	I0816 22:26:05.657515  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:08.152187  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.115193  262957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:10.115266  262957 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:10.120908  262957 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0816 22:26:10.120933  262957 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:10.134935  262957 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:10.353050  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.366285  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.366331  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366343  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.366357  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.366369  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.366379  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.366393  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.366402  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.366411  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.366419  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending
	I0816 22:26:10.366427  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.366434  262957 system_pods.go:74] duration metric: took 13.36244ms to wait for pod list to return data ...
	I0816 22:26:10.366443  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.369938  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.369965  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.369980  262957 node_conditions.go:105] duration metric: took 3.531866ms to run NodePressure ...
	I0816 22:26:10.370000  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:26:10.628407  262957 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:10.646464  262957 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:10.646488  262957 kubeadm.go:604] restartCluster took 16.226044614s
	I0816 22:26:10.646497  262957 kubeadm.go:392] StartCluster complete in 16.254606324s
	I0816 22:26:10.646519  262957 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.646648  262957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:10.648250  262957 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:10.653233  262957 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816222436-6487" rescaled to 1
	I0816 22:26:10.653302  262957 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:10.653319  262957 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:10.653298  262957 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:26:10.653366  262957 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653381  262957 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653387  262957 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:10.653413  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653421  262957 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653452  262957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816222436-6487"
	I0816 22:26:10.653502  262957 config.go:177] Loaded profile config "newest-cni-20210816222436-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0816 22:26:10.653557  262957 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653574  262957 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653581  262957 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:10.653607  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653741  262957 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816222436-6487"
	I0816 22:26:10.653767  262957 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.653776  262957 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:10.653788  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.653811  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.653952  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.654110  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.655941  262957 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:10.654275  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.656048  262957 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.718128  262957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:10.718268  262957 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.718285  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:10.718346  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.724823  262957 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:10.728589  262957 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.728694  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:10.728708  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:10.728778  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.372762  238595 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:10.735593  262957 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:10.735667  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:10.735676  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:10.735731  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.736243  262957 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816222436-6487"
	W0816 22:26:10.736275  262957 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:10.736305  262957 host.go:66] Checking if "newest-cni-20210816222436-6487" exists ...
	I0816 22:26:10.736853  262957 cli_runner.go:115] Run: docker container inspect newest-cni-20210816222436-6487 --format={{.State.Status}}
	I0816 22:26:10.773177  262957 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:10.773242  262957 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.774623  262957 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:26:10.789622  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.795332  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.807716  262957 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:10.807742  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:10.807797  262957 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210816222436-6487
	I0816 22:26:10.818897  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.828432  262957 api_server.go:70] duration metric: took 175.046767ms to wait for apiserver process to appear ...
	I0816 22:26:10.828463  262957 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:10.828475  262957 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:26:10.835641  262957 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:26:10.836517  262957 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:26:10.836536  262957 api_server.go:129] duration metric: took 8.066334ms to wait for apiserver health ...
	I0816 22:26:10.836544  262957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:10.844801  262957 system_pods.go:59] 10 kube-system pods found
	I0816 22:26:10.844830  262957 system_pods.go:61] "coredns-78fcd69978-nqx44" [6fe4486f-609a-4711-8984-d211fafbc14a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844841  262957 system_pods.go:61] "coredns-78fcd69978-sh8hf" [99ca4da4-63c0-4eb5-b1a9-824580994bf0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:26:10.844849  262957 system_pods.go:61] "etcd-newest-cni-20210816222436-6487" [3161f296-8eb0-48ae-8bbd-1a3104e0c5cc] Running
	I0816 22:26:10.844862  262957 system_pods.go:61] "kindnet-4wtm6" [f784c344-70ae-41f8-b749-4bd3d26179d1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 22:26:10.844871  262957 system_pods.go:61] "kube-apiserver-newest-cni-20210816222436-6487" [b48447a9-bba3-4ef7-89ab-ee9a020ac10b] Running
	I0816 22:26:10.844881  262957 system_pods.go:61] "kube-controller-manager-newest-cni-20210816222436-6487" [0d5a549e-612d-463d-9383-0ee0d9dd2a5c] Running
	I0816 22:26:10.844892  262957 system_pods.go:61] "kube-proxy-242br" [91a06e4b-7a8f-4f7c-a698-3f40c4024f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:26:10.844903  262957 system_pods.go:61] "kube-scheduler-newest-cni-20210816222436-6487" [c46e8919-9bc3-4dbc-ba13-c0d0b8c2ee7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:26:10.844920  262957 system_pods.go:61] "metrics-server-7c784ccb57-j52xp" [8b98c23d-9fa2-44dd-b9af-b1bf3215cd88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:10.844930  262957 system_pods.go:61] "storage-provisioner" [a71ed147-1a32-4360-9bcc-722db25ff42e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:10.844937  262957 system_pods.go:74] duration metric: took 8.387271ms to wait for pod list to return data ...
	I0816 22:26:10.844948  262957 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:10.847353  262957 default_sa.go:45] found service account: "default"
	I0816 22:26:10.847370  262957 default_sa.go:55] duration metric: took 2.413533ms for default service account to be created ...
	I0816 22:26:10.847380  262957 kubeadm.go:547] duration metric: took 194.000457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:26:10.847401  262957 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:10.849463  262957 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:10.849480  262957 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:10.849495  262957 node_conditions.go:105] duration metric: took 2.085396ms to run NodePressure ...
	I0816 22:26:10.849509  262957 start.go:231] waiting for startup goroutines ...
	I0816 22:26:10.862435  262957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816222436-6487/id_rsa Username:docker}
	I0816 22:26:10.919082  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:10.919107  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:10.928768  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:10.928790  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:10.936226  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:10.939559  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:10.939580  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:10.947378  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:10.947440  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:10.956321  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:10.956344  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:10.959897  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:11.016118  262957 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.016141  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:11.021575  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:11.021599  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:11.031871  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:11.038943  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:11.038964  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:11.137497  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:11.137523  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:11.217513  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:11.217538  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:11.232958  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:11.232983  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:11.248587  262957 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:11.248612  262957 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:11.327831  262957 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
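Each addon is staged as individual manifests under /etc/kubernetes/addons (scp'd from memory in the lines above) and then applied in a single kubectl invocation against the cluster-local kubeconfig. A directory apply is a rough equivalent sketch, with the caveat that it would also pick up every other addon file staged in that directory:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/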
	I0816 22:26:11.543579  262957 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816222436-6487"
	I0816 22:26:11.719432  262957 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:26:11.719457  262957 addons.go:344] enableAddons completed in 1.066141103s
	I0816 22:26:11.764284  262957 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:26:11.765766  262957 out.go:177] 
	W0816 22:26:11.765889  262957 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:26:11.767364  262957 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:26:11.768745  262957 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816222436-6487" cluster and "default" namespace by default
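The skew warning above is advisory: kubectl is supported within one minor version of the apiserver, and 1.20 against 1.22 is a skew of two. The suggested workaround shells out to minikube's version-matched binary:

	# Use the kubectl that matches the cluster instead of the host's
	# /usr/local/bin/kubectl.
	minikube kubectl -- get pods -A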
	I0816 22:26:10.885563  238595 out.go:204]   - Booting up control plane ...
	I0816 22:26:09.496573  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:11.995982  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:10.651451  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.151979  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:15.153271  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:13.996111  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:16.495078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:18.495570  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:17.651242  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.151935  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:20.496216  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:22.996199  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:24.933950  238595 out.go:204]   - Configuring RBAC rules ...
	I0816 22:26:25.352976  238595 cni.go:93] Creating CNI manager for ""
	I0816 22:26:25.353002  238595 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:26:22.650222  240293 pod_ready.go:102] pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:23.146708  240293 pod_ready.go:81] duration metric: took 4m0.400635585s waiting for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" ...
	E0816 22:26:23.146730  240293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-pqdqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:26:23.146749  240293 pod_ready.go:38] duration metric: took 4m42.319875628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:23.146776  240293 kubeadm.go:604] restartCluster took 4m59.914882197s
	W0816 22:26:23.146936  240293 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:26:23.146993  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 22:26:25.355246  238595 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:26:25.355311  238595 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:26:25.358718  238595 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:26:25.358738  238595 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:26:25.370945  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 22:26:25.621157  238595 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:26:25.621206  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.621226  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816221939-6487 minikube.k8s.io/updated_at=2021_08_16T22_26_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:25.733924  238595 ops.go:34] apiserver oom_adj: -16
	I0816 22:26:25.733912  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.298743  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:26.798723  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:27.298752  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:24.996387  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.495135  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:27.798667  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.298823  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:28.798898  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.299125  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.798939  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.298461  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:30.799163  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.298377  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:31.798518  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:32.299080  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:29.495517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:31.495703  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:33.496362  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:32.798224  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.298433  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:33.799075  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.298503  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:34.798223  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.299182  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.798578  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.298228  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:36.798801  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:37.299144  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:35.996187  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:38.495700  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:37.798260  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.298197  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.798424  238595 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:26:38.917845  238595 kubeadm.go:985] duration metric: took 13.296684424s to wait for elevateKubeSystemPrivileges.
	I0816 22:26:38.917877  238595 kubeadm.go:392] StartCluster complete in 5m29.078278154s
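For context: the run of "kubectl get sa default" polls above is minikube waiting for the cluster's default service account to exist before it elevates kube-system privileges; the log reports this elevateKubeSystemPrivileges wait took 13.3s. A minimal Go sketch of such a 500ms poll loop, where runCmd is an assumed stand-in for minikube's ssh_runner (illustrative shape, not minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitForDefaultSA retries "kubectl get sa default" roughly every 500ms
	// until it succeeds (the service account exists) or ctx expires.
	func waitForDefaultSA(ctx context.Context, runCmd func(string) error) error {
		const cmd = "sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default" +
			" --kubeconfig=/var/lib/minikube/kubeconfig"
		for {
			if err := runCmd(cmd); err == nil {
				return nil // default SA exists; privileges can be granted
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for default service account: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
				// poll again, matching the ~500ms cadence in the log
			}
		}
	}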
	I0816 22:26:38.917895  238595 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:38.917976  238595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:26:38.919347  238595 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:26:39.435280  238595 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816221939-6487" rescaled to 1
	I0816 22:26:39.435337  238595 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:26:39.436884  238595 out.go:177] * Verifying Kubernetes components...
	I0816 22:26:39.435381  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:26:39.436944  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:39.435407  238595 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:26:39.437054  238595 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437066  238595 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437084  238595 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437097  238595 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437107  238595 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.437111  238595 addons.go:147] addon dashboard should already be in state true
	I0816 22:26:39.437119  238595 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.435601  238595 config.go:177] Loaded profile config "default-k8s-different-port-20210816221939-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:26:39.437127  238595 addons.go:147] addon metrics-server should already be in state true
	I0816 22:26:39.437075  238595 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437147  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437156  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	W0816 22:26:39.437157  238595 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:26:39.437098  238595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:39.437219  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.437580  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437673  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437680  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.437786  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.450925  238595 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454454  238595 node_ready.go:49] node "default-k8s-different-port-20210816221939-6487" has status "Ready":"True"
	I0816 22:26:39.454478  238595 node_ready.go:38] duration metric: took 3.504801ms waiting for node "default-k8s-different-port-20210816221939-6487" to be "Ready" ...
	I0816 22:26:39.454492  238595 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:39.461585  238595 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:39.496014  238595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:26:39.496143  238595 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.496159  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:26:39.497741  238595 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.496222  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.497808  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:26:39.497821  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:26:39.497865  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.499561  238595 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816221939-6487"
	W0816 22:26:39.499598  238595 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:26:39.499623  238595 host.go:66] Checking if "default-k8s-different-port-20210816221939-6487" exists ...
	I0816 22:26:39.500057  238595 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210816221939-6487 --format={{.State.Status}}
	I0816 22:26:39.508968  238595 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:26:39.510786  238595 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:26:39.510877  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:26:39.510894  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:26:39.510963  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.543137  238595 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:26:39.551327  238595 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.551354  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:26:39.551418  238595 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210816221939-6487
	I0816 22:26:39.562469  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.567015  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.585895  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.601932  238595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816221939-6487/id_rsa Username:docker}
	I0816 22:26:39.730192  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:26:39.730216  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:26:39.735004  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:26:39.735028  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:26:39.825712  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:26:39.825735  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:26:39.828025  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:26:39.828046  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:26:39.829939  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:26:39.830581  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:26:39.917562  238595 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.917594  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:26:39.918416  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:26:39.918442  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:26:39.934239  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:26:39.935303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:26:39.935323  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:26:40.024142  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:26:40.024168  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:26:40.121870  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:26:40.121954  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:26:40.213303  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:26:40.213329  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:26:40.226600  238595 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
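For context: the sed pipeline at 22:26:39.543137 above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal. Reconstructed from the sed expression itself, the block it inserts immediately before the Corefile's "forward . /etc/resolv.conf" line is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}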
	I0816 22:26:40.233649  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:26:40.233674  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:26:40.315993  238595 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.316021  238595 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:26:40.329860  238595 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:26:40.913110  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08249574s)
	I0816 22:26:41.119373  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.185088873s)
	I0816 22:26:41.119413  238595 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816221939-6487"
	I0816 22:26:41.513353  238595 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.183438758s)
	I0816 22:26:41.515520  238595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:26:41.515560  238595 addons.go:344] enableAddons completed in 2.080164328s
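For context: every addon above is installed with the same two-step pattern visible in the log: render the manifest in memory, scp it to /etc/kubernetes/addons/, then apply all of an addon's files in a single kubectl call. A rough Go sketch of that pattern (scp and runCmd are assumed helpers, not minikube's real API):

	package main

	import "strings"

	// applyAddon mirrors the paired "scp memory --> ..." and
	// "kubectl apply -f ... -f ..." lines above: stage every rendered
	// manifest on the node, then apply them together.
	func applyAddon(scp func(data []byte, dst string) error,
		runCmd func(string) error, manifests map[string][]byte) error {
		paths := make([]string, 0, len(manifests))
		for dst, data := range manifests {
			if err := scp(data, dst); err != nil {
				return err
			}
			paths = append(paths, dst)
		}
		return runCmd("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.21.3/kubectl apply -f " +
			strings.Join(paths, " -f "))
	}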
	I0816 22:26:41.516293  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:40.996044  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:42.996463  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:43.970224  238595 pod_ready.go:102] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:45.016130  238595 pod_ready.go:92] pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.016153  238595 pod_ready.go:81] duration metric: took 5.554536838s waiting for pod "coredns-558bd4d5db-n6ddn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.016169  238595 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020503  238595 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.020523  238595 pod_ready.go:81] duration metric: took 4.344641ms waiting for pod "etcd-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.020537  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024738  238595 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.024753  238595 pod_ready.go:81] duration metric: took 4.208942ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.024762  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028646  238595 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.028661  238595 pod_ready.go:81] duration metric: took 3.89128ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.028670  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032791  238595 pod_ready.go:92] pod "kube-proxy-4pmgn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.032812  238595 pod_ready.go:81] duration metric: took 4.13529ms waiting for pod "kube-proxy-4pmgn" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.032823  238595 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369533  238595 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:26:45.369559  238595 pod_ready.go:81] duration metric: took 336.726404ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816221939-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:26:45.369571  238595 pod_ready.go:38] duration metric: took 5.915063438s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:26:45.369595  238595 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:26:45.369645  238595 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:45.395595  238595 api_server.go:70] duration metric: took 5.960222514s to wait for apiserver process to appear ...
	I0816 22:26:45.395625  238595 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:45.395637  238595 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0816 22:26:45.400217  238595 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0816 22:26:45.401067  238595 api_server.go:139] control plane version: v1.21.3
	I0816 22:26:45.401089  238595 api_server.go:129] duration metric: took 5.457124ms to wait for apiserver health ...
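For context: the healthz wait above is a plain HTTPS GET against the apiserver that expects status 200 and body "ok". A self-contained Go sketch of that probe (InsecureSkipVerify is a simplification for the sketch; minikube verifies against the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz GETs <base>/healthz, e.g. https://192.168.49.2:8444, and
	// reports the apiserver healthy only on HTTP 200 with body "ok".
	func probeHealthz(base string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
		}
		return nil
	}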
	I0816 22:26:45.401099  238595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:26:45.570973  238595 system_pods.go:59] 9 kube-system pods found
	I0816 22:26:45.571001  238595 system_pods.go:61] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.571006  238595 system_pods.go:61] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.571016  238595 system_pods.go:61] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.571020  238595 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.571025  238595 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.571028  238595 system_pods.go:61] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.571032  238595 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.571039  238595 system_pods.go:61] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.571069  238595 system_pods.go:61] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:26:45.571074  238595 system_pods.go:74] duration metric: took 169.970426ms to wait for pod list to return data ...
	I0816 22:26:45.571085  238595 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:26:45.768620  238595 default_sa.go:45] found service account: "default"
	I0816 22:26:45.768644  238595 default_sa.go:55] duration metric: took 197.553773ms for default service account to be created ...
	I0816 22:26:45.768653  238595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:26:45.970940  238595 system_pods.go:86] 9 kube-system pods found
	I0816 22:26:45.970973  238595 system_pods.go:89] "coredns-558bd4d5db-n6ddn" [6ea194b9-97b1-4f8b-b10a-e53c58d0b815] Running
	I0816 22:26:45.970982  238595 system_pods.go:89] "etcd-default-k8s-different-port-20210816221939-6487" [c8c2a683-c482-47ca-888f-c223fa5fc1a2] Running
	I0816 22:26:45.970987  238595 system_pods.go:89] "kindnet-5x8jh" [3ef671f3-edb1-4112-8340-37f83bc48660] Running
	I0816 22:26:45.970993  238595 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816221939-6487" [25fe7b38-f39b-4718-956a-b47bd970723a] Running
	I0816 22:26:45.971000  238595 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816221939-6487" [24ad0f39-a711-48df-9b8e-d9e0e08ee4f6] Running
	I0816 22:26:45.971006  238595 system_pods.go:89] "kube-proxy-4pmgn" [e049b17c-98b6-4692-8e5a-61c73abc97c2] Running
	I0816 22:26:45.971013  238595 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816221939-6487" [2c0db59b-68b7-4831-be15-4f882953eec5] Running
	I0816 22:26:45.971024  238595 system_pods.go:89] "metrics-server-7c784ccb57-lfkmq" [d9309c70-8cf5-4fdc-a79f-1c85f9ceda55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:26:45.971037  238595 system_pods.go:89] "storage-provisioner" [f876c73e-e1dd-4f3c-9bad-866adba8a427] Running
	I0816 22:26:45.971046  238595 system_pods.go:126] duration metric: took 202.387682ms to wait for k8s-apps to be running ...
	I0816 22:26:45.971061  238595 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:26:45.971104  238595 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:46.023089  238595 system_svc.go:56] duration metric: took 52.020591ms WaitForService to wait for kubelet.
	I0816 22:26:46.023116  238595 kubeadm.go:547] duration metric: took 6.587748491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:26:46.023141  238595 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:26:46.168888  238595 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:26:46.168915  238595 node_conditions.go:123] node cpu capacity is 8
	I0816 22:26:46.168933  238595 node_conditions.go:105] duration metric: took 145.786239ms to run NodePressure ...
	I0816 22:26:46.168945  238595 start.go:231] waiting for startup goroutines ...
	I0816 22:26:46.211558  238595 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:26:46.214728  238595 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816221939-6487" cluster and "default" namespace by default
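For context: the "minor skew: 1" note above compares the local kubectl (1.20.5) with the cluster (1.21.3); kubectl's support policy allows one minor version of skew in either direction, so this is reported rather than treated as an error. A small worked sketch of that comparison:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components
	// of two "major.minor.patch" versions, e.g. ("1.20.5", "1.21.3") -> 1.
	func minorSkew(client, server string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		c, err := minor(client)
		if err != nil {
			return 0, err
		}
		s, err := minor(server)
		if err != nil {
			return 0, err
		}
		if c < s {
			return s - c, nil
		}
		return c - s, nil
	}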
	I0816 22:26:45.495975  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:47.496653  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:49.995957  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:52.496048  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:26:54.204913  240293 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.057884699s)
	I0816 22:26:54.204974  240293 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:26:54.214048  240293 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 22:26:54.214110  240293 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:26:54.236967  240293 cri.go:76] found id: ""
	I0816 22:26:54.237019  240293 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:26:54.243553  240293 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:26:54.243606  240293 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:26:54.249971  240293 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
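For context: the exit-status-2 "ls" above is the stale-config check concluding there is nothing to clean up: after the kubeadm reset, none of the four control-plane kubeconfigs exist, so minikube skips cleanup and goes straight to kubeadm init. A sketch of that check (helper shape assumed, as before):

	package main

	// staleConfigsPresent reports whether kubeconfigs from a previous control
	// plane survive on the node; a non-zero ls exit, as in the log above,
	// means a clean slate and the stale-config cleanup is skipped.
	func staleConfigsPresent(runCmd func(string) error) bool {
		err := runCmd("sudo ls -la /etc/kubernetes/admin.conf" +
			" /etc/kubernetes/kubelet.conf" +
			" /etc/kubernetes/controller-manager.conf" +
			" /etc/kubernetes/scheduler.conf")
		return err == nil
	}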
	I0816 22:26:54.250416  240293 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:26:54.516364  240293 out.go:204]   - Generating certificates and keys ...
	I0816 22:26:55.249703  240293 out.go:204]   - Booting up control plane ...
	I0816 22:26:54.996103  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	[... 5 further identical metrics-server readiness polls (22:26:57.495660 through 22:27:05.995637) elided ...]
	I0816 22:27:08.496092  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:09.298595  240293 out.go:204]   - Configuring RBAC rules ...
	I0816 22:27:09.713304  240293 cni.go:93] Creating CNI manager for ""
	I0816 22:27:09.713327  240293 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 22:27:09.715227  240293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 22:27:09.715277  240293 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0816 22:27:09.718863  240293 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0816 22:27:09.718885  240293 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0816 22:27:09.731677  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
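For context: the cni.go lines above show the selection rule in action: the docker driver combined with a non-docker runtime (crio here) gets kindnet, whose manifest is staged at /var/tmp/minikube/cni.yaml and applied with kubectl. The decision reduces to something like this sketch (observed behavior, not minikube's full CNI manager):

	package main

	// recommendCNI captures the rule logged by cni.go: docker driver plus a
	// non-docker container runtime means minikube recommends kindnet.
	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "" // otherwise leave the CNI choice to the driver/runtime
	}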
	I0816 22:27:09.962283  240293 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:27:09.962350  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:09.962373  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816221913-6487 minikube.k8s.io/updated_at=2021_08_16T22_27_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.060642  240293 ops.go:34] apiserver oom_adj: -16
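For context: ops.go confirms the apiserver runs with oom_adj -16, read straight out of /proc by the bash one-liner above; -16 tells the kernel's OOM killer to strongly prefer other victims. An equivalent Go sketch (runCmd assumed, as before):

	package main

	import (
		"strconv"
		"strings"
	)

	// apiServerOOMAdj reads the apiserver's OOM adjustment the same way the
	// log does: cat /proc/<pid>/oom_adj for the kube-apiserver process.
	func apiServerOOMAdj(runCmd func(string) (string, error)) (int, error) {
		out, err := runCmd(`/bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"`)
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(out))
	}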
	I0816 22:27:10.060723  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:10.995882  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:13.495974  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:10.633246  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 8 further identical "get sa default" polls at ~500ms intervals (22:27:11.133139 through 22:27:14.632964) elided ...]
	I0816 22:27:15.133130  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:15.496295  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:17.995970  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:15.632812  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 8 further identical "get sa default" polls at ~500ms intervals (22:27:16.132692 through 22:27:19.632997) elided ...]
	I0816 22:27:20.133122  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:20.496121  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:22.995237  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:20.633092  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 6 further identical "get sa default" polls at ~500ms intervals (22:27:21.132697 through 22:27:23.632752) elided ...]
	I0816 22:27:24.132877  240293 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:27:24.198432  240293 kubeadm.go:985] duration metric: took 14.236137948s to wait for elevateKubeSystemPrivileges.
	I0816 22:27:24.198462  240293 kubeadm.go:392] StartCluster complete in 6m0.995598802s
	I0816 22:27:24.198481  240293 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.198572  240293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:27:24.200345  240293 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:27:24.715145  240293 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816221913-6487" rescaled to 1
	I0816 22:27:24.715193  240293 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:27:24.717805  240293 out.go:177] * Verifying Kubernetes components...
	I0816 22:27:24.717866  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:24.715250  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:27:24.715269  240293 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:27:24.717969  240293 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.717988  240293 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.717999  240293 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:27:24.718001  240293 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718022  240293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816221913-6487"
	I0816 22:27:24.718032  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718039  240293 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718052  240293 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:24.717986  240293 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816221913-6487"
	I0816 22:27:24.718085  240293 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.718100  240293 addons.go:147] addon dashboard should already be in state true
	I0816 22:27:24.718131  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718343  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.715429  240293 config.go:177] Loaded profile config "embed-certs-20210816221913-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 22:27:24.718059  240293 addons.go:147] addon metrics-server should already be in state true
	I0816 22:27:24.718417  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.718547  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718594  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.718818  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.782293  240293 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:27:24.783873  240293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:27:24.782196  240293 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816221913-6487"
	W0816 22:27:24.783987  240293 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:27:24.784020  240293 host.go:66] Checking if "embed-certs-20210816221913-6487" exists ...
	I0816 22:27:24.784033  240293 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:24.784044  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:27:24.785627  240293 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.785699  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:27:24.785710  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:27:24.784098  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.785767  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.784669  240293 cli_runner.go:115] Run: docker container inspect embed-certs-20210816221913-6487 --format={{.State.Status}}
	I0816 22:27:24.787448  240293 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:27:24.787521  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:27:24.787537  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:27:24.787582  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.844134  240293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.844870  240293 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:27:24.854809  240293 node_ready.go:49] node "embed-certs-20210816221913-6487" has status "Ready":"True"
	I0816 22:27:24.854830  240293 node_ready.go:38] duration metric: took 10.664038ms waiting for node "embed-certs-20210816221913-6487" to be "Ready" ...
	I0816 22:27:24.854841  240293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:27:24.855545  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.861143  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:24.861336  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.863265  240293 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:24.863285  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:27:24.863344  240293 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210816221913-6487
	I0816 22:27:24.865862  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:24.902450  240293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32959 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816221913-6487/id_rsa Username:docker}
	I0816 22:27:25.213259  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:27:25.213287  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:27:25.213568  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:27:25.233517  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:27:25.239365  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:27:25.239389  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:27:25.313683  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:27:25.313712  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:27:25.433541  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:27:25.433568  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:27:25.434948  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:27:25.434968  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:27:25.527034  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:27:25.527059  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:27:25.613745  240293 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.613777  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:27:25.625813  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:27:25.625851  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:27:25.713538  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:27:25.726637  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:27:25.726666  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:27:25.734858  240293 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0816 22:27:25.820941  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:27:25.820971  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:27:25.840244  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:27:25.840270  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:27:25.925179  240293 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:25.925202  240293 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:27:26.021980  240293 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:27:26.324641  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111035481s)
	I0816 22:27:26.324667  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091124904s)
	I0816 22:27:26.939142  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.022283  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.30869814s)
	I0816 22:27:27.022370  240293 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816221913-6487"
	I0816 22:27:27.431601  240293 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.409553263s)
	I0816 22:27:24.996042  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.495421  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:27.433693  240293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:27:27.433723  240293 addons.go:344] enableAddons completed in 2.718461512s
	I0816 22:27:29.427787  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:29.496073  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.995232  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:31.927352  240293 pod_ready.go:102] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:32.427442  240293 pod_ready.go:92] pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:32.427460  240293 pod_ready.go:81] duration metric: took 7.566292628s waiting for pod "coredns-558bd4d5db-4zdn7" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:32.427472  240293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.437803  240293 pod_ready.go:102] pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:34.934910  240293 pod_ready.go:97] error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
	I0816 22:27:34.934937  240293 pod_ready.go:81] duration metric: took 2.507455875s waiting for pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace to be "Ready" ...
	E0816 22:27:34.934947  240293 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-tc25b" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-tc25b" not found
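For context: this wait ends with "not found" treated as a skip rather than a failure. The coredns deployment was rescaled to 1 at 22:27:24.715145, so the surplus replica coredns-558bd4d5db-tc25b was deleted mid-wait; failing the whole readiness check for a pod that no longer exists would be wrong. A client-go sketch of that tolerant wait (shape assumed; minikube's pod_ready.go differs in detail):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReadyOrGone waits for the pod to report Ready but treats
	// NotFound as a skip: a pod deleted because its deployment was scaled
	// down should not fail the overall wait.
	func waitPodReadyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		for {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			switch {
			case apierrors.IsNotFound(err):
				return nil // pod deleted mid-wait: skip, don't fail
			case err == nil:
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}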
	I0816 22:27:34.934954  240293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938786  240293 pod_ready.go:92] pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.938802  240293 pod_ready.go:81] duration metric: took 3.83976ms waiting for pod "etcd-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.938813  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945030  240293 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.945045  240293 pod_ready.go:81] duration metric: took 6.225501ms waiting for pod "kube-apiserver-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.945054  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948474  240293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.948489  240293 pod_ready.go:81] duration metric: took 3.428771ms waiting for pod "kube-controller-manager-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.948497  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951783  240293 pod_ready.go:92] pod "kube-proxy-hdhfc" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:34.951796  240293 pod_ready.go:81] duration metric: took 3.294223ms waiting for pod "kube-proxy-hdhfc" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:34.951803  240293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136382  240293 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:27:35.136401  240293 pod_ready.go:81] duration metric: took 184.590897ms waiting for pod "kube-scheduler-embed-certs-20210816221913-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:27:35.136410  240293 pod_ready.go:38] duration metric: took 10.281557269s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:27:35.136426  240293 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:27:35.136458  240293 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:27:35.159861  240293 api_server.go:70] duration metric: took 10.444645521s to wait for apiserver process to appear ...
	I0816 22:27:35.159888  240293 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:27:35.159899  240293 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:27:35.164341  240293 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:27:35.165220  240293 api_server.go:139] control plane version: v1.21.3
	I0816 22:27:35.165240  240293 api_server.go:129] duration metric: took 5.346619ms to wait for apiserver health ...
	I0816 22:27:35.165249  240293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:27:35.339424  240293 system_pods.go:59] 9 kube-system pods found
	I0816 22:27:35.339458  240293 system_pods.go:61] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.339466  240293 system_pods.go:61] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.339472  240293 system_pods.go:61] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.339478  240293 system_pods.go:61] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.339485  240293 system_pods.go:61] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.339492  240293 system_pods.go:61] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.339497  240293 system_pods.go:61] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.339509  240293 system_pods.go:61] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.339527  240293 system_pods.go:61] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.339535  240293 system_pods.go:74] duration metric: took 174.279391ms to wait for pod list to return data ...
	I0816 22:27:35.339548  240293 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:27:35.536578  240293 default_sa.go:45] found service account: "default"
	I0816 22:27:35.536602  240293 default_sa.go:55] duration metric: took 197.045764ms for default service account to be created ...
	I0816 22:27:35.536610  240293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:27:35.738632  240293 system_pods.go:86] 9 kube-system pods found
	I0816 22:27:35.738661  240293 system_pods.go:89] "coredns-558bd4d5db-4zdn7" [2f84c841-a28b-41d0-b586-228464908707] Running
	I0816 22:27:35.738666  240293 system_pods.go:89] "etcd-embed-certs-20210816221913-6487" [a4640dda-3e6e-4007-a02c-4fe349e1157a] Running
	I0816 22:27:35.738671  240293 system_pods.go:89] "kindnet-7xdmw" [b333f4e6-e17c-4af3-96d9-00d5c0a566e2] Running
	I0816 22:27:35.738675  240293 system_pods.go:89] "kube-apiserver-embed-certs-20210816221913-6487" [3128b6ca-a978-4b60-b0af-573b750063c5] Running
	I0816 22:27:35.738681  240293 system_pods.go:89] "kube-controller-manager-embed-certs-20210816221913-6487" [ceb2b7da-4e1b-4cb9-a330-1d8e9ecc342f] Running
	I0816 22:27:35.738685  240293 system_pods.go:89] "kube-proxy-hdhfc" [785f8c4d-6231-44db-b97e-547d011c5c80] Running
	I0816 22:27:35.738689  240293 system_pods.go:89] "kube-scheduler-embed-certs-20210816221913-6487" [76384600-2c2f-4d18-b402-b66a7166b31d] Running
	I0816 22:27:35.738695  240293 system_pods.go:89] "metrics-server-7c784ccb57-jlfzn" [9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:27:35.738700  240293 system_pods.go:89] "storage-provisioner" [58f573d9-42f2-462f-bb9b-966bd46af856] Running
	I0816 22:27:35.738707  240293 system_pods.go:126] duration metric: took 202.09278ms to wait for k8s-apps to be running ...
	I0816 22:27:35.738724  240293 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:27:35.738761  240293 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:27:35.748257  240293 system_svc.go:56] duration metric: took 9.52848ms WaitForService to wait for kubelet.
	I0816 22:27:35.748278  240293 kubeadm.go:547] duration metric: took 11.033066699s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:27:35.748301  240293 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:27:35.936039  240293 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:27:35.936064  240293 node_conditions.go:123] node cpu capacity is 8
	I0816 22:27:35.936078  240293 node_conditions.go:105] duration metric: took 187.771781ms to run NodePressure ...
	I0816 22:27:35.936087  240293 start.go:231] waiting for startup goroutines ...
	I0816 22:27:35.979326  240293 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:27:35.981602  240293 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816221913-6487" cluster and "default" namespace by default
	I0816 22:27:34.495967  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:36.995351  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:38.995682  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:41.495818  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:43.496112  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:45.995716  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:47.995858  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:50.495288  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:52.496078  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:54.995517  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	I0816 22:27:57.495028  213866 pod_ready.go:102] pod "metrics-server-8546d8b77b-pb7tf" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:21:17 UTC, end at Mon 2021-08-16 22:28:00 UTC. --
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.570944043Z" level=info msg="Created container 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-587mw/kubernetes-dashboard" id=fce43c7d-36b3-4916-b344-1a4458be3b2f name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.571447186Z" level=info msg="Starting container: 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d" id=bc4d6c50-809a-450e-9a75-e00a5825de56 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.581129812Z" level=info msg="Started container 47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-587mw/kubernetes-dashboard" id=bc4d6c50-809a-450e-9a75-e00a5825de56 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:28 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:28.593389521Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.431278204Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=083bf632-adc2-4441-9c97-3a905ce720d9 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.432153626Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=43a65ed3-d63d-45b0-ba7c-fd264e2a88e0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.433499458Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=43a65ed3-d63d-45b0-ba7c-fd264e2a88e0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.434336523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=0c07bbc6-68d6-4464-b6c3-dc71c8d5d1c5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.612486634Z" level=info msg="Created container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=0c07bbc6-68d6-4464-b6c3-dc71c8d5d1c5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.613000475Z" level=info msg="Starting container: 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97" id=8f271b29-bac0-4039-9ff8-baec7e77f8f1 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:33 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:33.636655040Z" level=info msg="Started container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=8f271b29-bac0-4039-9ff8-baec7e77f8f1 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.017783534Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=a64dcacc-ee66-48d0-9160-efb6ee96637d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.019296627Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a64dcacc-ee66-48d0-9160-efb6ee96637d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.019935606Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=d3724e68-1a70-4708-bf20-d8860457212d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.021647989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d3724e68-1a70-4708-bf20-d8860457212d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.022458450Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=620b16c7-b8c5-4bd1-bc7d-febb0e7c5c66 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.180481781Z" level=info msg="Created container 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=620b16c7-b8c5-4bd1-bc7d-febb0e7c5c66 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.181048769Z" level=info msg="Starting container: 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf" id=bd181b1f-e40b-4f9a-8215-db250357d236 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:34 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:34.205931119Z" level=info msg="Started container 9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=bd181b1f-e40b-4f9a-8215-db250357d236 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:35 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:35.021451543Z" level=info msg="Removing container: 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97" id=e3476846-381d-4416-a663-e4ff0479657e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:35 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:35.056285144Z" level=info msg="Removed container 45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx/dashboard-metrics-scraper" id=e3476846-381d-4416-a663-e4ff0479657e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.908749876Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=ee12d614-becf-44f2-be7c-0aead71003c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.909043324Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=ee12d614-becf-44f2-be7c-0aead71003c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.909418865Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=f3218cd1-953a-4ff5-b0f5-c6009329d052 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 16 22:27:40 embed-certs-20210816221913-6487 crio[243]: time="2021-08-16 22:27:40.927861801Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	9e1ff151b67d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   26 seconds ago      Exited              dashboard-metrics-scraper   1                   a672a2744ad07
	47dc2b7452e70       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   32 seconds ago      Running             kubernetes-dashboard        0                   21bccc03a7df9
	debf85165af74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   33 seconds ago      Exited              storage-provisioner         0                   3c01981e9f768
	0a0ea4978ab6c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   34 seconds ago      Running             coredns                     0                   7975dbc93e3d2
	f33617d33e584       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   35 seconds ago      Running             kindnet-cni                 0                   1796b1138290b
	0aa147fb51000       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   36 seconds ago      Running             kube-proxy                  0                   a8f8e3fd73e89
	8bf1c59231af0       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   58 seconds ago      Running             etcd                        0                   90aaa0ae8b4e7
	aa576929be7b6       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   58 seconds ago      Running             kube-controller-manager     0                   babc83d4e2713
	ec5f895255549       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   58 seconds ago      Running             kube-scheduler              0                   1e423b06a707c
	34a8effb725d1       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   58 seconds ago      Running             kube-apiserver              0                   10cce228f146c
	
	* 
	* ==> coredns [0a0ea4978ab6cd1089b9d06f0a278a1b0505d7d08360f002b22ad418383c54c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.895921] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.763890] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.831977] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-6a14296a1513
	[  +0.000003] ll header: 00000000: 02 42 33 e7 ff 29 02 42 c0 a8 31 02 08 00        .B3..).B..1...
	[  +2.811776] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.832077] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.335384] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[Aug16 22:27] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000003] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[ +13.663740] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth55ef9b3c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 37 a8 8c 4d 9e 08 06        .......7..M...
	[  +2.163880] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb864e10f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 f5 50 a9 1a cc 08 06        ......&.P.....
	[  +0.707561] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth9c8775f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 1b 78 c1 d0 58 08 06        ......J.x..X..
	[  +0.000675] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6f717d76
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa 3a da 18 32 b9 08 06        .......:..2...
	[ +12.646052] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-c3bd6b7609c0
	[  +0.000002] ll header: 00000000: 02 42 80 fe e4 e0 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	
	* 
	* ==> etcd [8bf1c59231af08989a5db07139984cfdf2c0c9cf6fdc1d6e5bbf3f4a03bc5362] <==
	* raft2021/08/16 22:27:02 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2021-08-16 22:27:02.329057 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-16 22:27:02.329658 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:27:02.329740 I | embed: listening for peers on 192.168.76.2:2380
	2021-08-16 22:27:02.329828 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 became candidate at term 2
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2021/08/16 22:27:03 INFO: ea7e25599daad906 became leader at term 2
	raft2021/08/16 22:27:03 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2021-08-16 22:27:03.020939 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:27:03.021964 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:27:03.022035 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:27:03.022061 I | embed: ready to serve client requests
	2021-08-16 22:27:03.022078 I | etcdserver: published {Name:embed-certs-20210816221913-6487 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-16 22:27:03.022088 I | embed: ready to serve client requests
	2021-08-16 22:27:03.023447 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:27:03.023597 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-16 22:27:16.361700 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (165.201581ms) to execute
	2021-08-16 22:27:16.361785 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-embed-certs-20210816221913-6487\" " with result "range_response_count:1 size:4006" took too long (194.651986ms) to execute
	2021-08-16 22:27:21.294639 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:0 size:5" took too long (114.403073ms) to execute
	2021-08-16 22:27:23.128156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:26.372537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:36.372225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:27:46.371531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:28:10 up  1:07,  0 users,  load average: 1.47, 2.33, 2.24
	Linux embed-certs-20210816221913-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [34a8effb725d15ebb6c34b6f90d57dbc544e5ec2f8403d07ec1e1f6196fc373a] <==
	* I0816 22:27:06.713712       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:27:06.713814       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:27:07.531605       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:27:07.531627       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:27:07.536620       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0816 22:27:07.540238       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0816 22:27:07.540253       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:27:07.877048       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:27:07.929274       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0816 22:27:08.044065       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0816 22:27:08.045019       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:27:08.049101       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:27:09.115449       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:27:09.478093       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:27:09.535096       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:27:14.878779       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:27:23.534726       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:27:23.884521       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0816 22:27:29.020665       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 22:27:29.020766       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:27:29.020781       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:27:41.898139       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:27:41.898177       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:27:41.898185       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [aa576929be7b6ab6d22ef3cb64aa2fa59c7f5ca84a125be69a9a4b61bf1a0ef7] <==
	* I0816 22:27:23.942787       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-4zdn7"
	I0816 22:27:24.216892       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0816 22:27:24.226769       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-tc25b"
	I0816 22:27:26.444068       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0816 22:27:26.522098       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:27:26.536428       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:27:26.815020       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-jlfzn"
	I0816 22:27:27.018653       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:27:27.030965       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0816 22:27:27.033872       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.039891       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.041152       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.114622       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.115024       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.118457       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:27:27.125230       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.125282       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:27:27.125548       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.125552       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:27:27.129782       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.129836       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:27:27.133138       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:27:27.133208       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:27:27.214283       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-587mw"
	I0816 22:27:27.217209       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-cghgx"
	
	* 
	* ==> kube-proxy [0aa147fb5100088ef2b57609d0f2fcf92c5d3be6e41da177a1be6f4523451318] <==
	* I0816 22:27:25.015674       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0816 22:27:25.015729       1 server_others.go:140] Detected node IP 192.168.76.2
	W0816 22:27:25.015775       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 22:27:25.046502       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 22:27:25.046537       1 server_others.go:212] Using iptables Proxier.
	I0816 22:27:25.046547       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 22:27:25.046558       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 22:27:25.046841       1 server.go:643] Version: v1.21.3
	I0816 22:27:25.047971       1 config.go:315] Starting service config controller
	I0816 22:27:25.047998       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:27:25.048526       1 config.go:224] Starting endpoint slice config controller
	I0816 22:27:25.048540       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:27:25.113069       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:27:25.120869       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:27:25.212551       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:27:25.212599       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [ec5f895255549ca13063d25ca771f194444cccce85d22ec196e923f9c0520e16] <==
	* W0816 22:27:06.554631       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:27:06.554670       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:27:06.554690       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:27:06.554697       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:27:06.634790       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:27:06.635953       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:27:06.635978       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:27:06.636001       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:27:06.640597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:27:06.640768       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:27:06.640886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.640960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.641026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.644116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:27:06.644186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:27:06.644237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:27:06.644277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:27:06.644322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:27:06.644357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:27:06.644393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:27:06.644437       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:27:06.714300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:27:07.648700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:27:07.668902       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:27:09.336894       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:21:17 UTC, end at Mon 2021-08-16 22:28:11 UTC. --
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:27.332664    5721 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9cda492c-3ff6-4ef4-88a8-903a49b615b3-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-cghgx\" (UID: \"9cda492c-3ff6-4ef4-88a8-903a49b615b3\") "
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831191    5721 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831245    5721 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831383    5721 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dt8hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jlfzn_kube-system(9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.831435    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:27.945791    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:27 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:27.945988    5721 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:27:34 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:34.017301    5721 scope.go:111] "RemoveContainer" containerID="45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:35.020455    5721 scope.go:111] "RemoveContainer" containerID="45c7134293f7e1696f2d98876ed3bfb3d1a7d6c835d0f4d63ea96a88833feb97"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:35.020596    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:35.020942    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:35 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:35.256190    5721 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:27:36 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:36.023821    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:36 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:36.024284    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:37 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:37.233453    5721 scope.go:111] "RemoveContainer" containerID="9e1ff151b67d583fb87bc64bdf44294c0a67ed2e20cf36a0baf2edc7843918cf"
	Aug 16 22:27:37 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:37.233743    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cghgx_kubernetes-dashboard(9cda492c-3ff6-4ef4-88a8-903a49b615b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cghgx" podUID=9cda492c-3ff6-4ef4-88a8-903a49b615b3
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932349    5721 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932387    5721 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932498    5721 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dt8hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-jlfzn_kube-system(9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 16 22:27:40 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:40.932530    5721 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-jlfzn" podUID=9f60dfdc-77e8-4afc-b707-d6f56ebe4cbd
	Aug 16 22:27:45 embed-certs-20210816221913-6487 kubelet[5721]: E0816 22:27:45.355097    5721 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f/docker/4e30df1bcd775444ec06b0393f66bea19168ae6b47cc3b771a3a176b222b381f\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:27:47 embed-certs-20210816221913-6487 kubelet[5721]: I0816 22:27:47.080119    5721 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:27:47 embed-certs-20210816221913-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [47dc2b7452e70a03e9169b7a6aa763bf2f79612d522d7f9b27b49e095ea2773d] <==
	* 2021/08/16 22:27:28 Starting overwatch
	2021/08/16 22:27:28 Using namespace: kubernetes-dashboard
	2021/08/16 22:27:28 Using in-cluster config to connect to apiserver
	2021/08/16 22:27:28 Using secret token for csrf signing
	2021/08/16 22:27:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:27:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:27:28 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:27:28 Generating JWE encryption key
	2021/08/16 22:27:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:27:28 Initializing JWE encryption key from synchronized object
	2021/08/16 22:27:28 Creating in-cluster Sidecar client
	2021/08/16 22:27:28 Serving insecurely on HTTP port: 9090
	2021/08/16 22:27:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [debf85165af7410de1807f28a939ac729e5268c28dd7659f8c7023882c3ca649] <==
	* k8s.io/client-go/util/workqueue.(*Type).Get(0xc000516600, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003e8c80, 0x18e5530, 0xc00004a0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000280a60)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000280a60, 0x18b3d60, 0xc0003727e0, 0x1, 0xc00056a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000280a60, 0x3b9aca00, 0x0, 0x1, 0xc00056a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000280a60, 0x3b9aca00, 0xc00056a180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 162 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc00051e2c0, 0xc000132c80)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:28:10.909594  277135 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (24.53s)
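Note: three distinct signals are mixed together in the embed-certs post-mortem above. The fake.domain ErrImagePull lines from kubelet appear to be expected noise, since these tests point the metrics-server addon at a deliberately unresolvable registry (the Audit table further down shows the same "--registries=MetricsServer=fake.domain" flag being used for the newest-cni profile). The storage-provisioner goroutine dump is its normal idle loop (workers parked in workqueue Get under wait.BackoffUntil) printed on interruption, not a crash by itself. The actual collection failure is the TLS handshake timeout from "kubectl describe nodes", consistent with an apiserver left half-paused, and that is what surfaces as exit status 110. For reference, a minimal runnable sketch of the loop pattern behind those stack frames, assuming the same k8s.io/client-go and k8s.io/apimachinery modules the dump references:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.New()
		stop := make(chan struct{})

		worker := func() {
			item, shutdown := queue.Get() // the dumped goroutine is parked exactly here
			if shutdown {
				return
			}
			defer queue.Done(item)
			fmt.Println("processing", item)
		}

		// wait.Until -> JitterUntil -> BackoffUntil: the same frames as in the dump.
		go wait.Until(worker, time.Second, stop)

		queue.Add("pvc-demo")
		time.Sleep(100 * time.Millisecond)
		close(stop)
		queue.ShutDown()
	}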

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (84.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210816221528-6487 --alsologtostderr -v=1
E0816 22:29:11.452038    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210816221528-6487 --alsologtostderr -v=1: exit status 80 (1.996977213s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20210816221528-6487 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:29:10.148260  290085 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:29:10.148377  290085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:10.148391  290085 out.go:311] Setting ErrFile to fd 2...
	I0816 22:29:10.148396  290085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:10.148546  290085 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:29:10.148733  290085 out.go:305] Setting JSON to false
	I0816 22:29:10.148754  290085 mustload.go:65] Loading cluster: old-k8s-version-20210816221528-6487
	I0816 22:29:10.149048  290085 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:29:10.149559  290085 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:10.190987  290085 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:29:10.191898  290085 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210816221528-6487 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:29:10.194980  290085 out.go:177] * Pausing node old-k8s-version-20210816221528-6487 ... 
	I0816 22:29:10.195004  290085 host.go:66] Checking if "old-k8s-version-20210816221528-6487" exists ...
	I0816 22:29:10.195210  290085 ssh_runner.go:149] Run: systemctl --version
	I0816 22:29:10.195242  290085 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210816221528-6487
	I0816 22:29:10.239577  290085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32929 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816221528-6487/id_rsa Username:docker}
	I0816 22:29:10.332015  290085 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:10.341277  290085 pause.go:50] kubelet running: true
	I0816 22:29:10.341346  290085 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:29:10.513264  290085 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:29:10.513371  290085 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:29:10.588267  290085 cri.go:76] found id: "10ed0c559670bc837ba359f0311f63a6421f80088a63de7509a9ec51ec991904"
	I0816 22:29:10.588297  290085 cri.go:76] found id: "9824eba2c3288da7218f49c3b45afa0fc7d2164956ff5f942d0295c3756a728c"
	I0816 22:29:10.588304  290085 cri.go:76] found id: "ee5a79b4037bd6b75469365df36deee1ed085da23bbe102b48016fc6e3ba8a9b"
	I0816 22:29:10.588310  290085 cri.go:76] found id: "573ba7ae7e9400419eedaf1a8a703ea83fd88f346bc0926601b8ced182e07bed"
	I0816 22:29:10.588315  290085 cri.go:76] found id: "68efe63d2b18a4657b5d62078100ef1b193a339d0b486472b1c85f1d4189e4ff"
	I0816 22:29:10.588322  290085 cri.go:76] found id: "39eab1fff2a03f36068c707a1a5ae682543a0f87a9d27daeb773edb072c84571"
	I0816 22:29:10.588328  290085 cri.go:76] found id: "5d9a6699a082709279178dd0fcfe86839cc48019194dd5952cc13c71fe9474db"
	I0816 22:29:10.588334  290085 cri.go:76] found id: "1646719043afc023ec9a9c6e546a9e5e1fa4a04854ab10ce7530b9bbe1c06030"
	I0816 22:29:10.588339  290085 cri.go:76] found id: "29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0"
	I0816 22:29:10.588351  290085 cri.go:76] found id: "fc1a6c3255410ca13e0379073ba0e17180576d92a0ea5b02a71aa3563c7f8f18"
	I0816 22:29:10.588361  290085 cri.go:76] found id: ""
	I0816 22:29:10.588394  290085 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210816221528-6487 --alsologtostderr -v=1 failed: exit status 80
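Note: the stderr transcript above ends immediately after "sudo runc list -f json", right after ten CRI container IDs were found, so the pause flow appears to die while enumerating containers in the low-level runtime rather than while pausing them. As a hedged sketch of what consuming that command's output involves (the struct fields follow runc's state JSON and are an assumption here, not taken from minikube's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer models a subset of `runc list -f json` output (assumed field names).
	type runcContainer struct {
		ID     string `json:"id"`
		Pid    int    `json:"pid"`
		Status string `json:"status"` // e.g. "running", "paused"
		Bundle string `json:"bundle"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("runc list failed:", err) // the trace above stops at this step
			return
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, c := range containers {
			fmt.Println(c.ID, c.Status)
		}
	}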
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210816221528-6487
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210816221528-6487:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c",
	        "Created": "2021-08-16T22:15:30.360296281Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214144,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:17:44.551502469Z",
	            "FinishedAt": "2021-08-16T22:17:43.007475238Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/hosts",
	        "LogPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c-json.log",
	        "Name": "/old-k8s-version-20210816221528-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210816221528-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210816221528-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210816221528-6487",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210816221528-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210816221528-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210816221528-6487",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210816221528-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f953e5a42aefe93ff936ae1ccb6e431e9b8ee88db4d57d588759e20ec213f770",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32928"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f953e5a42aef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210816221528-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c6179f59cfd"
	                    ],
	                    "NetworkID": "4ed2783b447d2bd79ed0b03bc7819d26847626cfb8bbf7b3d91e9b95dcd18515",
	                    "EndpointID": "bc955fb0f881e5f66e6a1389b23de0f62cddbe9fabcbdfe55a25e530213af001",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
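Note: the inspect output shows the node container itself is fine: State.Status is "running", Paused is false, and the published ports are intact (22/tcp on 32929, matching the SSH dial in the pause trace above; 8443/tcp on 32926). The exit status 80 therefore happened inside the guest, not at the Docker layer. A small convenience sketch, not part of the suite, for pulling just those fields with the same Go-template mechanism the cli_runner lines use:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same idea as the cli_runner invocations in the trace, reduced to the
		// pause-relevant fields.
		format := `{{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "old-k8s-version-20210816221528-6487").CombinedOutput()
		fmt.Printf("%s err=%v\n", out, err)
	}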
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487: exit status 2 (379.944182ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
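Note: "status --format={{.Host}}" prints Running yet the command exits 2, apparently because minikube status returns a nonzero code whenever some component is unhealthy. Since the failed pause attempt had already run "sudo systemctl disable --now kubelet", a degraded-but-running status here is the expected state, which is why the harness itself annotates it "(may be ok)".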
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210816221528-6487 logs -n 25
E0816 22:29:13.752984    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
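Note: this cert_rotation.go line, like the similar one at 22:29:11 above, comes from the test binary's background certificate watcher and points at the client.crt of a profile (no-preload-20210816221555-6487 here, addons-20210816214127-6487 earlier) that this run deleted well before this point; the Audit table below shows no-preload-20210816221555-6487 being deleted at 22:24:36. The "no such file or directory" is therefore almost certainly stale cross-test noise rather than part of this failure.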

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p old-k8s-version-20210816221528-6487 logs -n 25: exit status 110 (40.891600873s)
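Note: the logs collection itself drags (40.9s) and exits 110, most likely for the same reason as the embed-certs run above: kubectl cannot complete a TLS handshake with the half-paused apiserver. That exact error text is produced client-side by Go's net/http when a connection accepts TCP but never finishes the handshake; a minimal self-contained sketch (assuming nothing from minikube) that reproduces the message:

	package main

	import (
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// A listener that accepts TCP connections but never speaks TLS.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		go func() {
			for {
				conn, err := ln.Accept()
				if err != nil {
					return
				}
				_ = conn // keep the connection open, but stay silent
			}
		}()

		client := &http.Client{Transport: &http.Transport{TLSHandshakeTimeout: 2 * time.Second}}
		_, err = client.Get("https://" + ln.Addr().String() + "/")
		fmt.Println(err) // ... net/http: TLS handshake timeout
	}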

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:21:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816221555-6487                          | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:20 UTC | Mon, 16 Aug 2021 22:24:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:26 UTC | Mon, 16 Aug 2021 22:24:26 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:27:35 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:27:46 UTC | Mon, 16 Aug 2021 22:27:46 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:07 UTC | Mon, 16 Aug 2021 22:28:11 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:11 UTC | Mon, 16 Aug 2021 22:28:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:11 UTC | Mon, 16 Aug 2021 22:28:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:15 UTC | Mon, 16 Aug 2021 22:28:16 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210816221528-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:28:59 UTC |
	|         | old-k8s-version-20210816221528-6487                        |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |         |                               |                               |
	|         | --keep-context=false                                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:46 UTC | Mon, 16 Aug 2021 22:29:00 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:29:00 UTC | Mon, 16 Aug 2021 22:29:01 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210816221528-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:29:09 UTC | Mon, 16 Aug 2021 22:29:10 UTC |
	|         | old-k8s-version-20210816221528-6487                        |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:29:01
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:29:01.104972  287041 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:29:01.105042  287041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:01.105046  287041 out.go:311] Setting ErrFile to fd 2...
	I0816 22:29:01.105049  287041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:01.105176  287041 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:29:01.105429  287041 out.go:305] Setting JSON to false
	I0816 22:29:01.147736  287041 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":4108,"bootTime":1629148833,"procs":304,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:29:01.147869  287041 start.go:121] virtualization: kvm guest
	I0816 22:29:01.150177  287041 out.go:177] * [enable-default-cni-20210816221527-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:29:01.151769  287041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:01.150341  287041 notify.go:169] Checking for updates...
	I0816 22:29:01.153187  287041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:29:01.154624  287041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:28:57.951171  280208 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.155815961s)
	I0816 22:28:58.294590  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.294880  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.795351  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.295190  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.794850  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.155967  287041 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:29:01.156497  287041 config.go:177] Loaded profile config "auto-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:01.156588  287041 config.go:177] Loaded profile config "kindnet-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:01.156694  287041 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:29:01.156731  287041 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:29:01.211752  287041 docker.go:132] docker version: linux-19.03.15
	I0816 22:29:01.211835  287041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:29:01.310520  287041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:29:01.260643746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:29:01.310607  287041 docker.go:244] overlay module found
	I0816 22:29:01.312898  287041 out.go:177] * Using the docker driver based on user configuration
	I0816 22:29:01.312931  287041 start.go:278] selected driver: docker
	I0816 22:29:01.312936  287041 start.go:751] validating driver "docker" against <nil>
	I0816 22:29:01.312959  287041 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:29:01.313002  287041 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:29:01.313039  287041 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:29:01.314964  287041 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:29:01.316997  287041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:29:01.414755  287041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:29:01.362116027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:29:01.414867  287041 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	E0816 22:29:01.414994  287041 start_flags.go:390] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 22:29:01.415019  287041 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:29:01.415046  287041 cni.go:93] Creating CNI manager for "bridge"
	I0816 22:29:01.415057  287041 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 22:29:01.415071  287041 start_flags.go:277] config:
	{Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:29:01.417311  287041 out.go:177] * Starting control plane node enable-default-cni-20210816221527-6487 in cluster enable-default-cni-20210816221527-6487
	I0816 22:29:01.417362  287041 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:29:01.419157  287041 out.go:177] * Pulling base image ...
	I0816 22:29:01.419191  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:01.419222  287041 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:29:01.419234  287041 cache.go:56] Caching tarball of preloaded images
	I0816 22:29:01.419286  287041 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:29:01.419454  287041 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:29:01.419470  287041 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:29:01.419553  287041 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json ...
	I0816 22:29:01.419579  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json: {Name:mk677dd39d86b1fa44630c8acfcbb06dfda0323d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:01.519535  287041 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:29:01.519561  287041 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:29:01.519570  287041 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:29:01.519606  287041 start.go:313] acquiring machines lock for enable-default-cni-20210816221527-6487: {Name:mkb4ba53a6c846dcc3a3f0e6e8acfdcda1ff27bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:29:01.519728  287041 start.go:317] acquired machines lock for "enable-default-cni-20210816221527-6487" in 99.625µs
	I0816 22:29:01.519754  287041 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:01.519820  287041 start.go:126] createHost starting for "" (driver="docker")
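
Editor's note: the start.go lines above serialize host creation behind a named machines lock, with a 500ms retry delay and a 10m0s timeout. Below is a minimal sketch of that acquire-with-timeout pattern in Go; it is not minikube's actual lock implementation, and the lock path in main is hypothetical.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire takes an exclusive lock by atomically creating the lock file,
    // retrying every delay until timeout elapses (cf. Delay:500ms
    // Timeout:10m0s in the log above).
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !os.IsExist(err) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        // Hypothetical lock path, for illustration only.
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // ... provision the machine while holding the lock ...
    }
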
	I0816 22:28:57.938613  278507 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.810624549s)
	I0816 22:28:58.128008  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.172780  278507 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.04473644s)
	I0816 22:28:59.628477  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.128201  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.627692  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.128019  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.627836  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.295290  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.795470  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.871216  280208 kubeadm.go:985] duration metric: took 18.767227943s to wait for elevateKubeSystemPrivileges.
	I0816 22:29:01.871243  280208 kubeadm.go:392] StartCluster complete in 36.933306492s
	I0816 22:29:01.871262  280208 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:01.871365  280208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:01.873049  280208 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.393143  280208 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210816221528-6487" rescaled to 1
	I0816 22:29:02.393200  280208 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:02.394696  280208 out.go:177] * Verifying Kubernetes components...
	I0816 22:29:02.394760  280208 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:02.393267  280208 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:29:02.393285  280208 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:29:02.394877  280208 addons.go:59] Setting storage-provisioner=true in profile "kindnet-20210816221528-6487"
	I0816 22:29:02.394897  280208 addons.go:135] Setting addon storage-provisioner=true in "kindnet-20210816221528-6487"
	W0816 22:29:02.394907  280208 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:29:02.393461  280208 config.go:177] Loaded profile config "kindnet-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:02.394940  280208 host.go:66] Checking if "kindnet-20210816221528-6487" exists ...
	I0816 22:29:02.394944  280208 addons.go:59] Setting default-storageclass=true in profile "kindnet-20210816221528-6487"
	I0816 22:29:02.394964  280208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210816221528-6487"
	I0816 22:29:02.395341  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.395544  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.128501  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:02.221985  278507 kubeadm.go:985] duration metric: took 21.7915582s to wait for elevateKubeSystemPrivileges.
	I0816 22:29:02.222017  278507 kubeadm.go:392] StartCluster complete in 41.069618548s
	I0816 22:29:02.222038  278507 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.222137  278507 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:02.224253  278507 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.742147  278507 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210816221527-6487" rescaled to 1
	I0816 22:29:02.742218  278507 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:02.744670  278507 out.go:177] * Verifying Kubernetes components...
	I0816 22:29:02.742382  278507 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:29:02.742408  278507 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:29:02.742579  278507 config.go:177] Loaded profile config "auto-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:02.744778  278507 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:02.744911  278507 addons.go:59] Setting storage-provisioner=true in profile "auto-20210816221527-6487"
	I0816 22:29:02.744928  278507 addons.go:135] Setting addon storage-provisioner=true in "auto-20210816221527-6487"
	W0816 22:29:02.744934  278507 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:29:02.744960  278507 host.go:66] Checking if "auto-20210816221527-6487" exists ...
	I0816 22:29:02.745255  278507 addons.go:59] Setting default-storageclass=true in profile "auto-20210816221527-6487"
	I0816 22:29:02.745276  278507 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210816221527-6487"
	I0816 22:29:02.745532  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.745562  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.833299  278507 addons.go:135] Setting addon default-storageclass=true in "auto-20210816221527-6487"
	W0816 22:29:02.833375  278507 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:29:02.833418  278507 host.go:66] Checking if "auto-20210816221527-6487" exists ...
	I0816 22:29:02.834044  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.461313  280208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:29:02.461436  280208 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.461451  280208 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:29:02.461535  280208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210816221528-6487
	I0816 22:29:02.470273  280208 addons.go:135] Setting addon default-storageclass=true in "kindnet-20210816221528-6487"
	W0816 22:29:02.470302  280208 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:29:02.470329  280208 host.go:66] Checking if "kindnet-20210816221528-6487" exists ...
	I0816 22:29:02.470842  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.517492  280208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kindnet-20210816221528-6487/id_rsa Username:docker}
	I0816 22:29:02.520548  280208 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:29:02.521024  280208 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210816221528-6487" to be "Ready" ...
	I0816 22:29:02.525092  280208 node_ready.go:49] node "kindnet-20210816221528-6487" has status "Ready":"True"
	I0816 22:29:02.525113  280208 node_ready.go:38] duration metric: took 4.05258ms waiting for node "kindnet-20210816221528-6487" to be "Ready" ...
	I0816 22:29:02.525124  280208 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:02.531556  280208 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:02.531577  280208 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:29:02.531633  280208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210816221528-6487
	I0816 22:29:02.537178  280208 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:02.603194  280208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kindnet-20210816221528-6487/id_rsa Username:docker}
	I0816 22:29:02.660123  280208 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.732946  280208 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:03.221238  280208 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0816 22:29:02.853372  278507 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:29:02.853501  278507 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.853513  278507 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:29:02.853580  278507 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210816221527-6487
	I0816 22:29:02.911493  278507 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:02.911523  278507 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:29:02.911604  278507 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210816221527-6487
	I0816 22:29:02.915995  278507 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:29:02.918559  278507 node_ready.go:35] waiting up to 5m0s for node "auto-20210816221527-6487" to be "Ready" ...
	I0816 22:29:02.923723  278507 node_ready.go:49] node "auto-20210816221527-6487" has status "Ready":"True"
	I0816 22:29:02.923749  278507 node_ready.go:38] duration metric: took 5.163592ms waiting for node "auto-20210816221527-6487" to be "Ready" ...
	I0816 22:29:02.923761  278507 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:02.940547  278507 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:02.943979  278507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/auto-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:03.017828  278507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/auto-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:03.141057  278507 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:03.144559  278507 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:03.519728  278507 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0816 22:29:01.522037  287041 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0816 22:29:01.522302  287041 start.go:160] libmachine.API.Create for "enable-default-cni-20210816221527-6487" (driver="docker")
	I0816 22:29:01.522335  287041 client.go:168] LocalClient.Create starting
	I0816 22:29:01.522430  287041 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:29:01.522463  287041 main.go:130] libmachine: Decoding PEM data...
	I0816 22:29:01.522484  287041 main.go:130] libmachine: Parsing certificate...
	I0816 22:29:01.522634  287041 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:29:01.522664  287041 main.go:130] libmachine: Decoding PEM data...
	I0816 22:29:01.522683  287041 main.go:130] libmachine: Parsing certificate...
	I0816 22:29:01.523085  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 22:29:01.565972  287041 cli_runner.go:162] docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 22:29:01.566049  287041 network_create.go:255] running [docker network inspect enable-default-cni-20210816221527-6487] to gather additional debugging logs...
	I0816 22:29:01.566071  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487
	W0816 22:29:01.604016  287041 cli_runner.go:162] docker network inspect enable-default-cni-20210816221527-6487 returned with exit code 1
	I0816 22:29:01.604052  287041 network_create.go:258] error running [docker network inspect enable-default-cni-20210816221527-6487]: docker network inspect enable-default-cni-20210816221527-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20210816221527-6487
	I0816 22:29:01.604066  287041 network_create.go:260] output of [docker network inspect enable-default-cni-20210816221527-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20210816221527-6487
	
	** /stderr **
	I0816 22:29:01.604120  287041 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:29:01.650271  287041 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010048] misses:0}
	I0816 22:29:01.650342  287041 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:29:01.650369  287041 network_create.go:106] attempt to create docker network enable-default-cni-20210816221527-6487 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 22:29:01.650427  287041 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20210816221527-6487
	I0816 22:29:01.730326  287041 network_create.go:90] docker network enable-default-cni-20210816221527-6487 192.168.49.0/24 created
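
Editor's note: network_create.go first reserves a free private subnet, then shells out to docker to create the bridge network. Below is a sketch of issuing the same command from Go; the flags are copied from the logged invocation, while the function name and error handling are illustrative only.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createNetwork issues the same `docker network create` command the log
    // shows, parameterized by subnet, gateway, and MTU.
    func createNetwork(name, subnet, gateway string, mtu int) error {
        args := []string{
            "network", "create", "--driver=bridge",
            "--subnet=" + subnet, "--gateway=" + gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
            "--label=created_by.minikube.sigs.k8s.io=true",
            name,
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Example values mirroring the log; requires a local docker daemon.
        if err := createNetwork("example-net", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
            panic(err)
        }
    }
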
	I0816 22:29:01.730356  287041 kic.go:106] calculated static IP "192.168.49.2" for the "enable-default-cni-20210816221527-6487" container
	I0816 22:29:01.730415  287041 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 22:29:01.772493  287041 cli_runner.go:115] Run: docker volume create enable-default-cni-20210816221527-6487 --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 22:29:01.813939  287041 oci.go:102] Successfully created a docker volume enable-default-cni-20210816221527-6487
	I0816 22:29:01.814044  287041 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20210816221527-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --entrypoint /usr/bin/test -v enable-default-cni-20210816221527-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 22:29:02.676951  287041 oci.go:106] Successfully prepared a docker volume enable-default-cni-20210816221527-6487
	W0816 22:29:02.677002  287041 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 22:29:02.677011  287041 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 22:29:02.677027  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:02.677061  287041 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 22:29:02.677072  287041 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 22:29:02.677151  287041 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210816221527-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 22:29:02.818655  287041 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20210816221527-6487 --name enable-default-cni-20210816221527-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --network enable-default-cni-20210816221527-6487 --ip 192.168.49.2 --volume enable-default-cni-20210816221527-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:29:03.526479  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Running}}
	I0816 22:29:03.587060  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:03.639274  287041 cli_runner.go:115] Run: docker exec enable-default-cni-20210816221527-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 22:29:03.786973  287041 oci.go:278] the created container "enable-default-cni-20210816221527-6487" has a running status.
	I0816 22:29:03.787014  287041 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa...
	I0816 22:29:03.986086  287041 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 22:29:04.358534  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:04.404113  287041 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 22:29:04.404138  287041 kic_runner.go:115] Args: [docker exec --privileged enable-default-cni-20210816221527-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 22:29:03.382302  280208 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:29:03.382329  280208 addons.go:344] enableAddons completed in 989.062223ms
	I0816 22:29:04.557240  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:03.757088  278507 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:29:03.757127  278507 addons.go:344] enableAddons completed in 1.014730525s
	I0816 22:29:04.985773  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:07.003267  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
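
Editor's note: the repeating pod_ready "Ready":"False" lines above are one iteration each of a check/sleep poll loop run against a deadline. Below is a generic sketch of such a wait loop; the condition in main is a stand-in, not minikube's pod check.

    package main

    import (
        "errors"
        "time"
    )

    // waitFor polls cond every interval until it reports true or the timeout
    // expires, mirroring the node_ready/pod_ready wait loops in the log.
    func waitFor(cond func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        // Stand-in condition: becomes true after two seconds.
        err := waitFor(func() (bool, error) { return time.Since(start) > 2*time.Second, nil },
            500*time.Millisecond, time.Minute)
        if err != nil {
            panic(err)
        }
    }
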
	I0816 22:29:06.734224  287041 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210816221527-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.057025876s)
	I0816 22:29:06.734258  287041 kic.go:188] duration metric: took 4.057197 seconds to extract preloaded images to volume
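
Editor's note: the four-second step above extracts the lz4-compressed preload tarball into the machine's docker volume by running tar inside the kicbase image. Below is a sketch replaying the logged docker run command from Go; the paths in main are placeholders, and a real run requires a local docker daemon.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload replays the logged command: run tar inside the base image
    // with the tarball bind-mounted read-only and the target volume mounted at
    // /extractDir.
    func extractPreload(tarball, volume, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        ).CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder arguments; a real run needs docker, the tarball, and the volume.
        if err := extractPreload("/path/to/preload.tar.lz4", "example-volume",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"); err != nil {
            panic(err)
        }
    }
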
	I0816 22:29:06.734321  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:06.775191  287041 machine.go:88] provisioning docker machine ...
	I0816 22:29:06.775232  287041 ubuntu.go:169] provisioning hostname "enable-default-cni-20210816221527-6487"
	I0816 22:29:06.775330  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:06.814158  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:06.814327  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:06.814345  287041 main.go:130] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20210816221527-6487 && echo "enable-default-cni-20210816221527-6487" | sudo tee /etc/hostname
	I0816 22:29:06.947664  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20210816221527-6487
	
	I0816 22:29:06.947744  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:06.991268  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:06.991415  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:06.991436  287041 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20210816221527-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20210816221527-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20210816221527-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:29:07.115326  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:29:07.115360  287041 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:29:07.115387  287041 ubuntu.go:177] setting up certificates
	I0816 22:29:07.115413  287041 provision.go:83] configureAuth start
	I0816 22:29:07.115469  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:07.156704  287041 provision.go:138] copyHostCerts
	I0816 22:29:07.156762  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:29:07.156772  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:29:07.156823  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:29:07.156892  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:29:07.156903  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:29:07.156925  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:29:07.156972  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:29:07.156979  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:29:07.156995  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:29:07.157046  287041 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20210816221527-6487 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20210816221527-6487]
	I0816 22:29:07.359663  287041 provision.go:172] copyRemoteCerts
	I0816 22:29:07.359720  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:29:07.359764  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.400845  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:07.491679  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:29:07.507602  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0816 22:29:07.522891  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:29:07.538118  287041 provision.go:86] duration metric: configureAuth took 422.691864ms
	I0816 22:29:07.538140  287041 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:29:07.538268  287041 config.go:177] Loaded profile config "enable-default-cni-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:07.538369  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.579820  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:07.580005  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:07.580027  287041 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:29:07.948839  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:29:07.948876  287041 machine.go:91] provisioned docker machine in 1.173662601s
	I0816 22:29:07.948888  287041 client.go:171] LocalClient.Create took 6.426532089s
	I0816 22:29:07.948900  287041 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20210816221527-6487" took 6.426603422s
	I0816 22:29:07.948909  287041 start.go:267] post-start starting for "enable-default-cni-20210816221527-6487" (driver="docker")
	I0816 22:29:07.948919  287041 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:29:07.949005  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:29:07.949073  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.990570  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.079230  287041 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:29:08.081821  287041 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:29:08.081843  287041 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:29:08.081851  287041 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:29:08.081856  287041 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:29:08.081871  287041 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:29:08.081913  287041 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:29:08.081995  287041 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:29:08.082085  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:29:08.088376  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:29:08.103781  287041 start.go:270] post-start completed in 154.856777ms
	I0816 22:29:08.104121  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:08.144086  287041 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json ...
	I0816 22:29:08.144335  287041 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:29:08.144383  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.184847  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.271638  287041 start.go:129] duration metric: createHost completed in 6.751805756s
	I0816 22:29:08.271665  287041 start.go:80] releasing machines lock for "enable-default-cni-20210816221527-6487", held for 6.75192288s
	I0816 22:29:08.271749  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:08.311792  287041 ssh_runner.go:149] Run: systemctl --version
	I0816 22:29:08.311844  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.311860  287041 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:29:08.311939  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.357194  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.358254  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.476481  287041 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:29:08.494584  287041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:29:08.502785  287041 docker.go:153] disabling docker service ...
	I0816 22:29:08.502822  287041 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:29:08.511798  287041 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:29:08.520496  287041 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:29:08.586786  287041 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:29:08.654110  287041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:29:08.662644  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:29:08.674155  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:29:08.681389  287041 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:29:08.687169  287041 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:29:08.687220  287041 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:29:08.693699  287041 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:29:08.699557  287041 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:29:08.756179  287041 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:29:08.764791  287041 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:29:08.764838  287041 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:29:08.767745  287041 start.go:413] Will wait 60s for crictl version
	I0816 22:29:08.767786  287041 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:29:08.794263  287041 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:29:08.794324  287041 ssh_runner.go:149] Run: crio --version
	I0816 22:29:08.850590  287041 ssh_runner.go:149] Run: crio --version
	I0816 22:29:08.912869  287041 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:29:08.912942  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:29:08.955570  287041 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 22:29:08.959326  287041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
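
Editor's note: the bash pipeline above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, then copy the result back into place. Below is the same idea in Go, sketched against a local file rather than over SSH; the file path in main is hypothetical.

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops blank lines and any existing line for host, then
    // appends "ip<TAB>host", mirroring the grep -v / echo / cp pipeline above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Hypothetical file so the sketch does not touch the real /etc/hosts.
        path := "/tmp/hosts.example"
        os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644)
        if err := ensureHostsEntry(path, "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
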
	I0816 22:29:08.968419  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:08.968480  287041 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:29:09.011377  287041 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:29:09.011399  287041 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:29:09.011446  287041 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:29:09.034810  287041 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:29:09.034829  287041 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:29:09.034895  287041 ssh_runner.go:149] Run: crio config
	I0816 22:29:09.101341  287041 cni.go:93] Creating CNI manager for "bridge"
	I0816 22:29:09.101363  287041 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:29:09.101374  287041 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-20210816221527-6487 NodeName:enable-default-cni-20210816221527-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:29:09.101486  287041 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "enable-default-cni-20210816221527-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
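
Editor's note: the kubeadm config printed above is rendered from the options struct logged at kubeadm.go:153. Below is a reduced sketch of that render step using text/template; the struct and template keep only a few fields for illustration and are not minikube's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // A reduced stand-in for the kubeadm options struct logged above.
    type kubeadmParams struct {
        AdvertiseAddress  string
        APIServerPort     int
        KubernetesVersion string
        PodSubnet         string
        ServiceCIDR       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        p := kubeadmParams{"192.168.49.2", 8443, "v1.21.3", "10.244.0.0/16", "10.96.0.0/12"}
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
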
	
	I0816 22:29:09.101594  287041 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=enable-default-cni-20210816221527-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0816 22:29:09.101672  287041 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:29:09.108520  287041 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:29:09.108584  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:29:09.114954  287041 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (569 bytes)
	I0816 22:29:09.126940  287041 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:29:09.138998  287041 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0816 22:29:09.150578  287041 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:29:09.153307  287041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:29:09.161690  287041 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487 for IP: 192.168.49.2
	I0816 22:29:09.161741  287041 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:29:09.161762  287041 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:29:09.161818  287041 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key
	I0816 22:29:09.161833  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt with IP's: []
	I0816 22:29:09.449885  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt ...
	I0816 22:29:09.449919  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: {Name:mk787487aff5e89283c8237ab26c20ab89fb98cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.450100  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key ...
	I0816 22:29:09.450115  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key: {Name:mk0024176c9cc5bca546e7fd653ef097eec1e9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.450202  287041 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2
	I0816 22:29:09.450211  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 22:29:09.609347  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 ...
	I0816 22:29:09.609376  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2: {Name:mka21856498bddf5b84651feac7da58004ff5027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.609543  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2 ...
	I0816 22:29:09.609559  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2: {Name:mk6533a98a19c316486022d7c38e7254d06a9017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.609641  287041 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt
	I0816 22:29:09.609696  287041 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key
	I0816 22:29:09.609744  287041 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key
	I0816 22:29:09.609753  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt with IP's: []
	I0816 22:29:09.914264  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt ...
	I0816 22:29:09.914298  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt: {Name:mk48e6ae98d60f94f52b7619bfb5c2c07c65c4ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.914475  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key ...
	I0816 22:29:09.914487  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key: {Name:mked0bd6193e1ab4657b0eb9a2fbeb1be39a51e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
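
Each crypto.go step above generates a key pair, builds a certificate for the requested IP SANs, and PEM-encodes both to disk under a file lock. A self-contained sketch of that flow using only the Go standard library; unlike the real code, which signs with the minikubeCA key, this one self-signs for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// writeCertAndKey is an illustrative stand-in for the crypto.go flow in
// the log: generate a key, issue a cert for the given IP SANs, and
// PEM-encode both to disk.
func writeCertAndKey(certPath, keyPath string, ips []net.IP) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	// Template used as its own parent => self-signed (the real code
	// passes the CA cert and CA key here instead).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certOut, 0644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyOut, 0600)
}

func main() {
	ips := []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1")}
	if err := writeCertAndKey("apiserver.crt", "apiserver.key", ips); err != nil {
		panic(err)
	}
}
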
	I0816 22:29:09.914646  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:29:09.914680  287041 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:29:09.914690  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:29:09.914715  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:29:09.914739  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:29:09.914764  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:29:09.914814  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:29:09.915685  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:29:09.934308  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:29:10.028545  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:29:10.044046  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:29:10.060095  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:29:10.076045  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:29:10.092462  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:29:10.109485  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:29:10.126092  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:29:10.142946  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:29:10.161311  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:29:10.177140  287041 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:29:10.189371  287041 ssh_runner.go:149] Run: openssl version
	I0816 22:29:10.194096  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:29:10.201394  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.204217  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.204268  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.208773  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:29:10.215971  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:29:10.223388  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.226167  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.226211  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.230987  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 22:29:10.238153  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:29:10.245703  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.248452  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.248490  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.252867  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
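
The commands above link each CA PEM into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) because OpenSSL locates trust anchors by hashed filename. A small Go sketch of the same hash-and-symlink dance, shelling out to the exact openssl invocation shown in the log; linkByHash is illustrative and takes the target directory as a parameter so it can run against a scratch directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the subject hash of a PEM via `openssl x509 -hash`
// and points <dir>/<hash>.0 at it, like the ln -fs commands above.
func linkByHash(pemPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // replace any stale link, matching ln -fs semantics
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("minikubeCA.pem", "."); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
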
	I0816 22:29:10.259517  287041 kubeadm.go:390] StartCluster: {Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:29:10.259598  287041 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:29:10.259638  287041 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:29:10.282086  287041 cri.go:76] found id: ""
	I0816 22:29:10.282139  287041 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:29:10.288490  287041 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:29:10.294755  287041 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:29:10.294799  287041 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:29:10.300722  287041 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:29:10.300766  287041 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:29:10.609613  287041 out.go:204]   - Generating certificates and keys ...
	I0816 22:29:07.058281  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:09.556570  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:09.486435  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:11.985342  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:17:44 UTC, end at Mon 2021-08-16 22:29:12 UTC. --
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.544160061Z" level=info msg="Created container 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=df17ca10-e14f-4270-9be8-3975eceb9917 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.544686200Z" level=info msg="Starting container: 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0" id=d1f19a0d-33a3-4c6a-b2ee-15791116f8c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.567463047Z" level=info msg="Started container 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=d1f19a0d-33a3-4c6a-b2ee-15791116f8c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.883456974Z" level=info msg="Removing container: e231f4ecb120e2248b78aaf7c05cd9130cf1461fb487af9bacb3bc406eebdc4d" id=20d711e9-7a38-4a06-9f76-2db82f05e3eb name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.916469589Z" level=info msg="Removed container e231f4ecb120e2248b78aaf7c05cd9130cf1461fb487af9bacb3bc406eebdc4d: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=20d711e9-7a38-4a06-9f76-2db82f05e3eb name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:21.379892700Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=d12524f4-c44d-453c-9dfa-0a78bd09861a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:21.380232669Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=d12524f4-c44d-453c-9dfa-0a78bd09861a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:36.379503105Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=e92d982f-6662-48d8-84e5-dc2b89863576 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:36.379800289Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=e92d982f-6662-48d8-84e5-dc2b89863576 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:51.379848039Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=df8199cc-8485-4311-8fc7-c3ba81d3bfe8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:51.380134943Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=df8199cc-8485-4311-8fc7-c3ba81d3bfe8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:01 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:01.384822714Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.1" id=89881e64-9cb1-4d6e-812a-db638305cb11 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:01 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:01.385540332Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e,RepoTags:[k8s.gcr.io/pause:3.1],RepoDigests:[k8s.gcr.io/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea],Size_:748776,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=89881e64-9cb1-4d6e-812a-db638305cb11 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:05.379606364Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=8e62c339-85b2-4dbe-8efd-c34a5a50036a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:05.379880035Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=8e62c339-85b2-4dbe-8efd-c34a5a50036a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:17.379536429Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=f3cbe963-184a-4e9c-8ae6-e46198c7eb81 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:17.379832866Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=f3cbe963-184a-4e9c-8ae6-e46198c7eb81 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:28.379581420Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=65090874-5c43-411d-ba16-d232e258ecdb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:28.379792470Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=65090874-5c43-411d-ba16-d232e258ecdb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:40.379761150Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=caa2ed8f-6cac-491b-9f12-1a95facf31f3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:40.380055032Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=caa2ed8f-6cac-491b-9f12-1a95facf31f3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:52.379504736Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=7eb3ace6-d9f8-4da5-b9ab-4168d304ce45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:52.379710341Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=7eb3ace6-d9f8-4da5-b9ab-4168d304ce45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:29:06.379679984Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=47a43c89-6b29-42de-96bd-f2810315ec63 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:29:06.379970936Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=47a43c89-6b29-42de-96bd-f2810315ec63 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	29cb6abc91ecc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   5                   a76d338eee6de
	10ed0c559670b       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   5 minutes ago       Running             coredns                     0                   a75c25d2d9fd6
	9824eba2c3288       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner         0                   2aeef213ad07d
	fc1a6c3255410       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   5 minutes ago       Running             kubernetes-dashboard        0                   fb7f0589487bb
	ee5a79b4037bd       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   5 minutes ago       Running             kindnet-cni                 0                   268365ea989a7
	573ba7ae7e940       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   5 minutes ago       Running             kube-proxy                  0                   3ee8752e7a891
	68efe63d2b18a       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   6 minutes ago       Running             etcd                        0                   545c9d2ab1fb0
	39eab1fff2a03       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   6 minutes ago       Running             kube-controller-manager     0                   2e33cbc445de2
	5d9a6699a0827       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   6 minutes ago       Running             kube-scheduler              0                   99077c4379571
	1646719043afc       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   6 minutes ago       Running             kube-apiserver              0                   3e22a704d7145
	
	* 
	* ==> coredns [10ed0c559670bc837ba359f0311f63a6421f80088a63de7509a9ec51ec991904] <==
	* .:53
	2021-08-16T22:24:12.962Z [INFO] CoreDNS-1.3.1
	2021-08-16T22:24:12.962Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-16T22:24:12.962Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e 2a 76 0d a5 db 08 06        ......>*v.....
	[  +0.632198] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 81 94 18 1c 35 08 06        ...........5..
	[  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 8e 81 94 18 1c 35 08 06        ...........5..
	[ +21.310518] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 5e 37 3e 10 aa 08 06        ......~^7>....
	[  +2.681397] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 52 d5 8b 84 4c a6 08 06        ......R...L...
	[  +5.603962] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa eb a1 ec 39 af 08 06        ..........9...
	[  +0.000004] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff fa eb a1 ec 39 af 08 06        ..........9...
	[  +0.004280] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be 23 46 8c 4d d6 08 06        .......#F.M...
	[  +0.403009] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa eb a1 ec 39 af 08 06        ..........9...
	[  +0.026417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff be 23 46 8c 4d d6 08 06        .......#F.M...
	[  +5.266307] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth304ddcac
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 12 2f b9 81 b0 0d 08 06        ......./......
	[  +2.687875] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb4a2a423
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b2 2b 41 97 91 1e 08 06        .......+A.....
	[  +0.983719] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth194e2de4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 66 49 8f a1 9e ad 08 06        ......fI......
	
	* 
	* ==> etcd [68efe63d2b18a4657b5d62078100ef1b193a339d0b486472b1c85f1d4189e4ff] <==
	* 2021-08-16 22:25:16.692067 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (1.062929436s) to execute
	2021-08-16 22:25:16.692145 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-pb7tf\" " with result "range_response_count:1 size:1956" took too long (1.311614746s) to execute
	2021-08-16 22:25:16.692185 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-pb7tf.169be9b4f48c19b7\" " with result "range_response_count:1 size:511" took too long (1.309904831s) to execute
	2021-08-16 22:25:17.338805 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210816221528-6487\" " with result "range_response_count:1 size:5050" took too long (643.942509ms) to execute
	2021-08-16 22:25:17.348517 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (653.599843ms) to execute
	2021-08-16 22:28:52.039410 W | etcdserver: request "header:<ID:3238505195140492486 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:796 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238505195140492484 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>" with result "size:16" took too long (1.687475809s) to execute
	2021-08-16 22:28:52.039530 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5050" took too long (2.374963136s) to execute
	2021-08-16 22:28:52.188160 W | wal: sync duration of 1.836345842s, expected less than 1s
	2021-08-16 22:28:52.309878 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (2.198421725s) to execute
	2021-08-16 22:28:52.309919 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (2.360089157s) to execute
	2021-08-16 22:28:52.310055 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.288274379s) to execute
	2021-08-16 22:28:52.310147 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (269.126412ms) to execute
	2021-08-16 22:28:52.310926 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (475.426002ms) to execute
	2021-08-16 22:28:57.446583 W | wal: sync duration of 2.412274702s, expected less than 1s
	2021-08-16 22:28:57.937853 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (1.615747157s) to execute
	2021-08-16 22:28:57.937890 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.361401369s) to execute
	2021-08-16 22:28:57.937944 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:9 size:20520" took too long (1.742253642s) to execute
	2021-08-16 22:28:57.938083 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (2.512479783s) to execute
	2021-08-16 22:28:57.938092 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.800163452s) to execute
	2021-08-16 22:28:57.938201 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "range_response_count:0 size:5" took too long (1.559935972s) to execute
	2021-08-16 22:28:57.938286 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (495.308538ms) to execute
	2021-08-16 22:28:57.938299 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210816221528-6487\" " with result "range_response_count:1 size:396" took too long (498.586634ms) to execute
	2021-08-16 22:28:57.938419 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (496.635016ms) to execute
	2021-08-16 22:28:57.938440 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (195.783788ms) to execute
	2021-08-16 22:28:59.163254 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (683.488234ms) to execute
	
	* 
	* ==> kernel <==
	*  22:29:53 up  1:09,  0 users,  load average: 2.79, 2.62, 2.36
	Linux old-k8s-version-20210816221528-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [1646719043afc023ec9a9c6e546a9e5e1fa4a04854ab10ce7530b9bbe1c06030] <==
	* I0816 22:29:02.091739       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:02.091877       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:03.092046       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:03.092166       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:04.093196       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:04.093319       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:05.093489       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:05.093601       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:06.093800       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:06.093933       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:07.094099       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:07.094202       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:07.232773       1 controller.go:102] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0816 22:29:07.232851       1 handler_proxy.go:89] no RequestInfo found in the context
	E0816 22:29:07.232931       1 controller.go:108] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:29:07.232946       1 controller.go:121] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:29:08.094359       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:08.094471       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:09.094647       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:09.094756       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:10.094915       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:10.095030       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:11.095236       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:11.095365       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	
	* 
	* ==> kube-controller-manager [39eab1fff2a03f36068c707a1a5ae682543a0f87a9d27daeb773edb072c84571] <==
	* I0816 22:23:29.614680       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"9527ed20-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-p56jc
	I0816 22:23:30.035943       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"94e8fa5e-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-pb7tf
	I0816 22:23:30.534019       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"9531ed2c-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-kvm5k
	E0816 22:23:57.137720       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:23:59.697881       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0816 22:24:07.712109       1 node_lifecycle_controller.go:1036] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:24:27.389213       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:24:31.699611       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:24:57.640749       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:25:03.700952       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:25:27.892003       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:25:35.702252       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:25:58.143378       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:26:07.703679       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:26:28.394912       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:26:39.704940       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:26:58.646055       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:11.706228       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:27:28.897456       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:43.707468       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:27:59.148732       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:15.709156       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:29.400057       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:47.711036       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:59.651739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [573ba7ae7e9400419eedaf1a8a703ea83fd88f346bc0926601b8ced182e07bed] <==
	* W0816 22:23:29.142809       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0816 22:23:29.225256       1 server_others.go:148] Using iptables Proxier.
	I0816 22:23:29.228234       1 server_others.go:178] Tearing down inactive rules.
	E0816 22:23:30.330850       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0816 22:23:30.724242       1 server.go:555] Version: v1.14.0
	I0816 22:23:30.728326       1 config.go:202] Starting service config controller
	I0816 22:23:30.728352       1 config.go:102] Starting endpoints config controller
	I0816 22:23:30.728372       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0816 22:23:30.728352       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0816 22:23:30.828526       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0816 22:23:30.828674       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [5d9a6699a082709279178dd0fcfe86839cc48019194dd5952cc13c71fe9474db] <==
	* W0816 22:23:04.336333       1 authentication.go:55] Authentication is disabled
	I0816 22:23:04.336348       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0816 22:23:04.336686       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0816 22:23:06.199512       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:06.223725       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:06.223881       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:06.232049       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:06.232284       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:06.232338       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:06.232518       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:06.233704       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:06.233755       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:06.238769       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:07.200609       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:07.224766       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:07.227652       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:07.232982       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:07.234073       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:07.235099       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:07.236257       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:07.237225       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:07.238359       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:07.239562       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0816 22:23:09.037938       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0816 22:23:09.138108       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:17:44 UTC, end at Mon 2021-08-16 22:29:53 UTC. --
	Aug 16 22:27:09 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:09.394942    5012 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 16 22:27:09 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:09.394978    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:12.883437    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:21.380456    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:22 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:22.420359    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:32 old-k8s-version-20210816221528-6487 kubelet[5012]: W0816 22:27:32.050898    5012 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:36.380124    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:37 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:37.379327    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:49 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:49.379455    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:51.380376    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:02 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:02.379305    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:05.380126    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:16 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:16.379413    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:17.380084    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:27 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:27.379418    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:28.380054    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:40.379442    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:40.380292    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:52.379988    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:53 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:53.379421    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:29:06.379483    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:29:06.380250    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [fc1a6c3255410ca13e0379073ba0e17180576d92a0ea5b02a71aa3563c7f8f18] <==
	* 2021/08/16 22:24:07 Using namespace: kubernetes-dashboard
	2021/08/16 22:24:07 Using in-cluster config to connect to apiserver
	2021/08/16 22:24:07 Using secret token for csrf signing
	2021/08/16 22:24:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:24:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:24:07 Successful initial request to the apiserver, version: v1.14.0
	2021/08/16 22:24:07 Generating JWE encryption key
	2021/08/16 22:24:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:24:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:24:07 Initializing JWE encryption key from synchronized object
	2021/08/16 22:24:07 Creating in-cluster Sidecar client
	2021/08/16 22:24:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:24:07 Serving insecurely on HTTP port: 9090
	2021/08/16 22:24:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:25:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:25:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:27:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:29:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:29:52 Metric client health check failed: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper/proxy/healthz": http2: client connection lost. Retrying in 30 seconds.
	2021/08/16 22:24:07 Starting overwatch
	
	* 
	* ==> storage-provisioner [9824eba2c3288da7218f49c3b45afa0fc7d2164956ff5f942d0295c3756a728c] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 114 [sync.Cond.Wait, 5 minutes]:
	sync.runtime_notifyListWait(0xc00032a2d0, 0x2)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032a2c0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003722a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem(0xc0004acf00, 0x18e5530, 0xc00004a180, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:935 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runClaimWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:924
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.2()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e2500)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e2500, 0x18b3d60, 0xc0005e0b40, 0x1, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e2500, 0x3b9aca00, 0x0, 0x1, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e2500, 0x3b9aca00, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	

-- /stdout --
** stderr ** 
	E0816 22:29:53.138339  290704 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
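Two notes on the excerpt above. The ImagePullBackOff and CrashLoopBackOff entries are expected noise from the test fixture: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table below), so the kubelet can never resolve fake.domain/k8s.gcr.io/echoserver:1.4 and simply keeps backing off. The "describe nodes" failure with a TLS handshake timeout is the actual symptom, consistent with the apiserver becoming unreachable while the cluster is being paused. The storage-provisioner goroutine dump is likewise not a crash: goroutine 114 is an idle claim worker parked in sync.Cond.Wait inside a workqueue Get, re-driven by the wait.Until -> JitterUntil -> BackoffUntil chain visible in the frames. A minimal, self-contained sketch of that worker pattern, using the client-go and apimachinery packages the trace names (the queue item and timings here are hypothetical):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		claims := workqueue.New() // Get() parks in sync.Cond.Wait while the queue is empty, as in goroutine 114
		stop := make(chan struct{})

		// wait.Until re-invokes the worker until stop closes; internally it is
		// JitterUntil over BackoffUntil, the exact frames in the dump above.
		go wait.Until(func() {
			item, shutdown := claims.Get()
			if shutdown {
				return
			}
			defer claims.Done(item)
			fmt.Println("provisioning for", item)
		}, time.Second, stop)

		claims.Add("default/my-claim") // hypothetical PVC key
		time.Sleep(100 * time.Millisecond)
		close(stop)
		claims.ShutDown() // unblocks the parked Get so the worker goroutine can exit
	}

A worker blocked this way for "5 minutes" is therefore just an idle provisioner with no claims to process, not a hang.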
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210816221528-6487
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210816221528-6487:

-- stdout --
	[
	    {
	        "Id": "2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c",
	        "Created": "2021-08-16T22:15:30.360296281Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214144,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T22:17:44.551502469Z",
	            "FinishedAt": "2021-08-16T22:17:43.007475238Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/hosts",
	        "LogPath": "/var/lib/docker/containers/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c/2c6179f59cfd0da59dc0d48131d8dac1fcbea61bd9724d9e65f6ee26edcd7d4c-json.log",
	        "Name": "/old-k8s-version-20210816221528-6487",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210816221528-6487:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210816221528-6487",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e-init/diff:/var/lib/docker/overlay2/26ccb05b45c39eb8fb9a222c35182ae0b2281818311893cc1c820d93f21711ae/diff:/var/lib/docker/overlay2/cd717b1332cc33945d2b9d777346faa34d6bdfa47023daf36a4373532eccf421/diff:/var/lib/docker/overlay2/d9b45e6cc99a52e8f81dfe312587e9103ceac6c138634c90ec0d647cd90dcdc6/diff:/var/lib/docker/overlay2/253a269adcaa7871d92fb351ceb4d32421104cd62f741ba1c63d8e3f2e19a0b5/diff:/var/lib/docker/overlay2/42adedc2a706910fdba591e62cbfab57dd03d85b882f6eef38c3e131a0c52bea/diff:/var/lib/docker/overlay2/77ed2008192a16e91da5c6b9ccc6e71cea338a7cb24d331dc695e5f74fb04573/diff:/var/lib/docker/overlay2/29dccf868a296ce0889c929159fd3ac0dcdc00ba6e2ef864b056335954ffab4d/diff:/var/lib/docker/overlay2/592818f0a9ab8c99c2438562d9c5a479083cb08f41ad30ddefa9b493a8098150/diff:/var/lib/docker/overlay2/05fa44df096fbdb60f935bef69e952369e10cd8f1adc570c170c1f08d09743f7/diff:/var/lib/docker/overlay2/65cf48
1c471f231a663edc6331272317a851b0fd3395b2d6be43899f9b5443f9/diff:/var/lib/docker/overlay2/86a49202a7a2abea1a7a57c52f2e19cf991e9319b11dcf7f4105892413e0096e/diff:/var/lib/docker/overlay2/6f1a2a1805e90af512eef163f484935d57b0a0a72a65b5775c38efb5fe937d13/diff:/var/lib/docker/overlay2/f514eb992d78367550d2ce1c020771f984eff047302d6af3dcd71172270c0087/diff:/var/lib/docker/overlay2/cdae383a0302a2ed835985028caad813d6e16c7caf0e146005b47a4a1e94369b/diff:/var/lib/docker/overlay2/1fe02987c4f18db439e026431c44d1b8b233da766c4dcd331a4c20fd91e88748/diff:/var/lib/docker/overlay2/28bb6b828668bb682f3523f9f71fbd429ef6180cd98455625ea014ef4e532b77/diff:/var/lib/docker/overlay2/2878ae01f9dccb219097c2fc5873be677fe8a70e2bd936d25f018483a3d6c1a3/diff:/var/lib/docker/overlay2/da86efa9bad2d8b46c88440b1c4864add391f7cc9ee07d4227dd37d76c9ffd85/diff:/var/lib/docker/overlay2/93bf17ee1f2be70c70aa38b3aea9daa3a62e189181d1aa56acc52f5cf90f7436/diff:/var/lib/docker/overlay2/e8db603cd7ef789cc310bf7782375ee624d80ac0bceb3a3075900e67540c152a/diff:/var/lib/d
ocker/overlay2/1ba1c50d72480f49b835b34db6c3b21bccdf6530a8201631099a7e287dbf94a6/diff:/var/lib/docker/overlay2/2eeb0aecf508bc4a797d240b330bfa5b54b84a085737a96dad9faf9ea129d4ce/diff:/var/lib/docker/overlay2/40ba0dc970df76cce520350594f86384e121b47a190106559752e8f7cf138540/diff:/var/lib/docker/overlay2/8b80e2d0e53fb136919fd9ca7d4d198a92c324a19aa1827fcd672a3aa49a0dcc/diff:/var/lib/docker/overlay2/bd1d10c9a8876242ad61e3841c439177c3ae3a1967e5b053729107676134c622/diff:/var/lib/docker/overlay2/1f65ee5880c0ce6e027f2e72118cb956fcdcba9bb0806d98de6532bcbdfac6dd/diff:/var/lib/docker/overlay2/8bde131df5f26fa8ad65d4ddcb9ddb84de567427ffeb4aee76487489e530ef77/diff:/var/lib/docker/overlay2/73a68cda04f65298e5bd2581f0f16a8de96cd42145968d5c39b0c8ae3228a423/diff:/var/lib/docker/overlay2/0acfb650c0dfd3b9df005fe48f305b155d61d21914271c258e95bf42636e9bf8/diff:/var/lib/docker/overlay2/33e44b42ef88ffe285cb67e240248a629167dd1c83362d51a67679cbd866c30b/diff:/var/lib/docker/overlay2/c71ddf9e0475e4a3987bf3d616723e9d055a79c4bb6d92cf34fc67be2d9
c5134/diff:/var/lib/docker/overlay2/faece61d8cf45bab42cd499befcdb8d4548229311b35600cced068a3ce186025/diff:/var/lib/docker/overlay2/5e45db6ab7620f8baf25ff5ff01c806cd64575cdc89e5bb2965a25800093d5f9/diff:/var/lib/docker/overlay2/7324fe47bf39776cd61539f09f63fbb6f5e17aedc985bcc48bc12cdaaff4c4e4/diff:/var/lib/docker/overlay2/98a50fa9fa33654b2ed4ce8f7118f04d7004260519abca0d403b2ab337e9c273/diff:/var/lib/docker/overlay2/0e38707c109f3f1c54c7190021d525f898a86b07f2e51c86dc3e6d916eff3ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f675404c0f3aa5467be8c3ed12f9732feb4ba9a8414937f296fe8966821c42e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210816221528-6487",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210816221528-6487/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210816221528-6487",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210816221528-6487",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210816221528-6487",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f953e5a42aefe93ff936ae1ccb6e431e9b8ee88db4d57d588759e20ec213f770",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32928"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f953e5a42aef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210816221528-6487": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c6179f59cfd"
	                    ],
	                    "NetworkID": "4ed2783b447d2bd79ed0b03bc7819d26847626cfb8bbf7b3d91e9b95dcd18515",
	                    "EndpointID": "bc955fb0f881e5f66e6a1389b23de0f62cddbe9fabcbdfe55a25e530213af001",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
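One detail in the inspect output worth keeping in mind when reading the statuses below: under NetworkSettings.Ports every exposed container port is published on a 127.0.0.1 ephemeral port, e.g. ssh (22/tcp) at 127.0.0.1:32929 and the apiserver (8443/tcp) at 127.0.0.1:32926, which is how host-side tooling reaches the node container. A minimal sketch of pulling that mapping out of docker inspect JSON (a standalone illustration, not minikube's actual code; the trimmed literal is copied from the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// portBinding mirrors the objects under NetworkSettings.Ports in the inspect output.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	func main() {
		// Trimmed copy of the inspect output above; a real caller would feed it
		// the full `docker inspect <container>` document.
		raw := []byte(`{"NetworkSettings":{"Ports":{
			"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"32929"}],
			"8443/tcp":[{"HostIp":"127.0.0.1","HostPort":"32926"}]}}}`)

		var doc struct {
			NetworkSettings struct {
				Ports map[string][]portBinding `json:"Ports"`
			} `json:"NetworkSettings"`
		}
		if err := json.Unmarshal(raw, &doc); err != nil {
			log.Fatal(err)
		}
		for _, b := range doc.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIP, b.HostPort)
		}
	}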
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487: exit status 2 (354.431801ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210816221528-6487 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p old-k8s-version-20210816221528-6487 logs -n 25: exit status 110 (41.010472887s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:29 UTC | Mon, 16 Aug 2021 22:24:29 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | no-preload-20210816221555-6487                             | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:30 UTC | Mon, 16 Aug 2021 22:24:31 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:32 UTC | Mon, 16 Aug 2021 22:24:35 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816221555-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:24:36 UTC |
	|         | no-preload-20210816221555-6487                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:36 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:25 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:25 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:25:46 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816222436-6487 --memory=2200            | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:46 UTC | Mon, 16 Aug 2021 22:26:11 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:12 UTC | Mon, 16 Aug 2021 22:26:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:02 UTC | Mon, 16 Aug 2021 22:26:46 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:56 UTC | Mon, 16 Aug 2021 22:26:57 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:15 UTC | Mon, 16 Aug 2021 22:27:35 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:27:46 UTC | Mon, 16 Aug 2021 22:27:46 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:07 UTC | Mon, 16 Aug 2021 22:28:11 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210816222436-6487                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:11 UTC | Mon, 16 Aug 2021 22:28:12 UTC |
	|         | newest-cni-20210816222436-6487                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:11 UTC | Mon, 16 Aug 2021 22:28:15 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816221913-6487                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:15 UTC | Mon, 16 Aug 2021 22:28:16 UTC |
	|         | embed-certs-20210816221913-6487                            |                                                |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210816221528-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:17:43 UTC | Mon, 16 Aug 2021 22:28:59 UTC |
	|         | old-k8s-version-20210816221528-6487                        |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |         |                               |                               |
	|         | --keep-context=false                                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:46 UTC | Mon, 16 Aug 2021 22:29:00 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816221939-6487 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:29:00 UTC | Mon, 16 Aug 2021 22:29:01 UTC |
	|         | default-k8s-different-port-20210816221939-6487             |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210816221528-6487            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:29:09 UTC | Mon, 16 Aug 2021 22:29:10 UTC |
	|         | old-k8s-version-20210816221528-6487                        |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p auto-20210816221527-6487                                | auto-20210816221527-6487                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:12 UTC | Mon, 16 Aug 2021 22:29:50 UTC |
	|         | --memory=2048                                              |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	| ssh     | -p auto-20210816221527-6487                                | auto-20210816221527-6487                       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:29:50 UTC | Mon, 16 Aug 2021 22:29:51 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |         |                               |                               |
	| start   | -p kindnet-20210816221528-6487                             | kindnet-20210816221528-6487                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:28:16 UTC | Mon, 16 Aug 2021 22:29:53 UTC |
	|         | --memory=2048                                              |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |         |                               |                               |
	|         | --cni=kindnet --driver=docker                              |                                                |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:29:01
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:29:01.104972  287041 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:29:01.105042  287041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:01.105046  287041 out.go:311] Setting ErrFile to fd 2...
	I0816 22:29:01.105049  287041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:29:01.105176  287041 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:29:01.105429  287041 out.go:305] Setting JSON to false
	I0816 22:29:01.147736  287041 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":4108,"bootTime":1629148833,"procs":304,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:29:01.147869  287041 start.go:121] virtualization: kvm guest
	I0816 22:29:01.150177  287041 out.go:177] * [enable-default-cni-20210816221527-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:29:01.151769  287041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:01.150341  287041 notify.go:169] Checking for updates...
	I0816 22:29:01.153187  287041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:29:01.154624  287041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:28:57.951171  280208 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.155815961s)
	I0816 22:28:58.294590  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.294880  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.795351  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.295190  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.794850  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.155967  287041 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:29:01.156497  287041 config.go:177] Loaded profile config "auto-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:01.156588  287041 config.go:177] Loaded profile config "kindnet-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:01.156694  287041 config.go:177] Loaded profile config "old-k8s-version-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0816 22:29:01.156731  287041 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:29:01.211752  287041 docker.go:132] docker version: linux-19.03.15
	I0816 22:29:01.211835  287041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:29:01.310520  287041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:29:01.260643746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:29:01.310607  287041 docker.go:244] overlay module found
	I0816 22:29:01.312898  287041 out.go:177] * Using the docker driver based on user configuration
	I0816 22:29:01.312931  287041 start.go:278] selected driver: docker
	I0816 22:29:01.312936  287041 start.go:751] validating driver "docker" against <nil>
	I0816 22:29:01.312959  287041 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:29:01.313002  287041 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:29:01.313039  287041 out.go:242] ! Your cgroup does not allow setting memory.
	I0816 22:29:01.314964  287041 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:29:01.316997  287041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:29:01.414755  287041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-16 22:29:01.362116027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:29:01.414867  287041 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	E0816 22:29:01.414994  287041 start_flags.go:390] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 22:29:01.415019  287041 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:29:01.415046  287041 cni.go:93] Creating CNI manager for "bridge"
	I0816 22:29:01.415057  287041 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 22:29:01.415071  287041 start_flags.go:277] config:
	{Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:29:01.417311  287041 out.go:177] * Starting control plane node enable-default-cni-20210816221527-6487 in cluster enable-default-cni-20210816221527-6487
	I0816 22:29:01.417362  287041 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 22:29:01.419157  287041 out.go:177] * Pulling base image ...
	I0816 22:29:01.419191  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:01.419222  287041 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 22:29:01.419234  287041 cache.go:56] Caching tarball of preloaded images
	I0816 22:29:01.419286  287041 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 22:29:01.419454  287041 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 22:29:01.419470  287041 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0816 22:29:01.419553  287041 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json ...
	I0816 22:29:01.419579  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json: {Name:mk677dd39d86b1fa44630c8acfcbb06dfda0323d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:01.519535  287041 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 22:29:01.519561  287041 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 22:29:01.519570  287041 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:29:01.519606  287041 start.go:313] acquiring machines lock for enable-default-cni-20210816221527-6487: {Name:mkb4ba53a6c846dcc3a3f0e6e8acfdcda1ff27bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:29:01.519728  287041 start.go:317] acquired machines lock for "enable-default-cni-20210816221527-6487" in 99.625µs
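
The profile write earlier and this machines lock both show the same pattern in their braces: a named lock retried every Delay (500ms) until Timeout expires (10m0s here; acquisition above succeeded in under 100µs). A self-contained sketch of those semantics built on an O_EXCL lock file, standing in for minikube's actual lock package:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire retries creating the lock file every delay until timeout,
	// mirroring the {Delay:500ms Timeout:10m0s} fields in the log.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: exactly one caller wins the file.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}
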
	I0816 22:29:01.519754  287041 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:01.519820  287041 start.go:126] createHost starting for "" (driver="docker")
	I0816 22:28:57.938613  278507 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.810624549s)
	I0816 22:28:58.128008  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:28:59.172780  278507 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.04473644s)
	I0816 22:28:59.628477  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.128201  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:00.627692  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.128019  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.627836  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.295290  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.795470  280208 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:01.871216  280208 kubeadm.go:985] duration metric: took 18.767227943s to wait for elevateKubeSystemPrivileges.
	I0816 22:29:01.871243  280208 kubeadm.go:392] StartCluster complete in 36.933306492s
	I0816 22:29:01.871262  280208 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:01.871365  280208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:01.873049  280208 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.393143  280208 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210816221528-6487" rescaled to 1
	I0816 22:29:02.393200  280208 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:02.394696  280208 out.go:177] * Verifying Kubernetes components...
	I0816 22:29:02.394760  280208 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:02.393267  280208 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:29:02.393285  280208 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:29:02.394877  280208 addons.go:59] Setting storage-provisioner=true in profile "kindnet-20210816221528-6487"
	I0816 22:29:02.394897  280208 addons.go:135] Setting addon storage-provisioner=true in "kindnet-20210816221528-6487"
	W0816 22:29:02.394907  280208 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:29:02.393461  280208 config.go:177] Loaded profile config "kindnet-20210816221528-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:02.394940  280208 host.go:66] Checking if "kindnet-20210816221528-6487" exists ...
	I0816 22:29:02.394944  280208 addons.go:59] Setting default-storageclass=true in profile "kindnet-20210816221528-6487"
	I0816 22:29:02.394964  280208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210816221528-6487"
	I0816 22:29:02.395341  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.395544  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.128501  278507 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:02.221985  278507 kubeadm.go:985] duration metric: took 21.7915582s to wait for elevateKubeSystemPrivileges.
	I0816 22:29:02.222017  278507 kubeadm.go:392] StartCluster complete in 41.069618548s
	I0816 22:29:02.222038  278507 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.222137  278507 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:02.224253  278507 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:02.742147  278507 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210816221527-6487" rescaled to 1
	I0816 22:29:02.742218  278507 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:02.744670  278507 out.go:177] * Verifying Kubernetes components...
	I0816 22:29:02.742382  278507 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:29:02.742408  278507 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:29:02.742579  278507 config.go:177] Loaded profile config "auto-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:02.744778  278507 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:02.744911  278507 addons.go:59] Setting storage-provisioner=true in profile "auto-20210816221527-6487"
	I0816 22:29:02.744928  278507 addons.go:135] Setting addon storage-provisioner=true in "auto-20210816221527-6487"
	W0816 22:29:02.744934  278507 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:29:02.744960  278507 host.go:66] Checking if "auto-20210816221527-6487" exists ...
	I0816 22:29:02.745255  278507 addons.go:59] Setting default-storageclass=true in profile "auto-20210816221527-6487"
	I0816 22:29:02.745276  278507 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210816221527-6487"
	I0816 22:29:02.745532  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.745562  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.833299  278507 addons.go:135] Setting addon default-storageclass=true in "auto-20210816221527-6487"
	W0816 22:29:02.833375  278507 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:29:02.833418  278507 host.go:66] Checking if "auto-20210816221527-6487" exists ...
	I0816 22:29:02.834044  278507 cli_runner.go:115] Run: docker container inspect auto-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:02.461313  280208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:29:02.461436  280208 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.461451  280208 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:29:02.461535  280208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210816221528-6487
	I0816 22:29:02.470273  280208 addons.go:135] Setting addon default-storageclass=true in "kindnet-20210816221528-6487"
	W0816 22:29:02.470302  280208 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:29:02.470329  280208 host.go:66] Checking if "kindnet-20210816221528-6487" exists ...
	I0816 22:29:02.470842  280208 cli_runner.go:115] Run: docker container inspect kindnet-20210816221528-6487 --format={{.State.Status}}
	I0816 22:29:02.517492  280208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kindnet-20210816221528-6487/id_rsa Username:docker}
	I0816 22:29:02.520548  280208 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:29:02.521024  280208 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210816221528-6487" to be "Ready" ...
	I0816 22:29:02.525092  280208 node_ready.go:49] node "kindnet-20210816221528-6487" has status "Ready":"True"
	I0816 22:29:02.525113  280208 node_ready.go:38] duration metric: took 4.05258ms waiting for node "kindnet-20210816221528-6487" to be "Ready" ...
	I0816 22:29:02.525124  280208 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:02.531556  280208 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:02.531577  280208 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:29:02.531633  280208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210816221528-6487
	I0816 22:29:02.537178  280208 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:02.603194  280208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kindnet-20210816221528-6487/id_rsa Username:docker}
	I0816 22:29:02.660123  280208 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.732946  280208 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:03.221238  280208 start.go:728] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
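
The "host record injected into CoreDNS" line is the result of the sed pipeline a few lines up: a hosts{} block is spliced into the coredns ConfigMap's Corefile just ahead of the forward directive, so host.minikube.internal resolves to the docker network gateway from inside the cluster. The same splice sketched in Go (minikube shells out to sed instead):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block directly above the "forward ."
	// line so the extra name is answered before upstream resolution.
	func injectHostRecord(corefile, ip string) string {
		block := "        hosts {\n           " + ip + " host.minikube.internal\n           fallthrough\n        }\n"
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
				out.WriteString(block)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
	}
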
	I0816 22:29:02.853372  278507 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:29:02.853501  278507 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:02.853513  278507 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:29:02.853580  278507 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210816221527-6487
	I0816 22:29:02.911493  278507 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:02.911523  278507 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:29:02.911604  278507 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210816221527-6487
	I0816 22:29:02.915995  278507 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:29:02.918559  278507 node_ready.go:35] waiting up to 5m0s for node "auto-20210816221527-6487" to be "Ready" ...
	I0816 22:29:02.923723  278507 node_ready.go:49] node "auto-20210816221527-6487" has status "Ready":"True"
	I0816 22:29:02.923749  278507 node_ready.go:38] duration metric: took 5.163592ms waiting for node "auto-20210816221527-6487" to be "Ready" ...
	I0816 22:29:02.923761  278507 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:02.940547  278507 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:02.943979  278507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/auto-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:03.017828  278507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/auto-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:03.141057  278507 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:03.144559  278507 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:03.519728  278507 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0816 22:29:01.522037  287041 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0816 22:29:01.522302  287041 start.go:160] libmachine.API.Create for "enable-default-cni-20210816221527-6487" (driver="docker")
	I0816 22:29:01.522335  287041 client.go:168] LocalClient.Create starting
	I0816 22:29:01.522430  287041 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:29:01.522463  287041 main.go:130] libmachine: Decoding PEM data...
	I0816 22:29:01.522484  287041 main.go:130] libmachine: Parsing certificate...
	I0816 22:29:01.522634  287041 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:29:01.522664  287041 main.go:130] libmachine: Decoding PEM data...
	I0816 22:29:01.522683  287041 main.go:130] libmachine: Parsing certificate...
	I0816 22:29:01.523085  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 22:29:01.565972  287041 cli_runner.go:162] docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 22:29:01.566049  287041 network_create.go:255] running [docker network inspect enable-default-cni-20210816221527-6487] to gather additional debugging logs...
	I0816 22:29:01.566071  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487
	W0816 22:29:01.604016  287041 cli_runner.go:162] docker network inspect enable-default-cni-20210816221527-6487 returned with exit code 1
	I0816 22:29:01.604052  287041 network_create.go:258] error running [docker network inspect enable-default-cni-20210816221527-6487]: docker network inspect enable-default-cni-20210816221527-6487: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20210816221527-6487
	I0816 22:29:01.604066  287041 network_create.go:260] output of [docker network inspect enable-default-cni-20210816221527-6487]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20210816221527-6487
	
	** /stderr **
	I0816 22:29:01.604120  287041 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:29:01.650271  287041 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010048] misses:0}
	I0816 22:29:01.650342  287041 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
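
network.go above derives the full address plan from the reserved /24: the gateway takes the first host address, clients get .2 through .254, and .255 is the broadcast address. The arithmetic, sketched for ordinary IPv4 prefixes (illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Same subnet the log reserves; every derived value follows from the mask.
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()

		// broadcast = network address OR inverted mask.
		broadcast := make(net.IP, 4)
		for i := range broadcast {
			broadcast[i] = base[i] | ^ipnet.Mask[i]
		}

		step := func(ip net.IP, d int) net.IP { // move within the last octet
			out := make(net.IP, 4)
			copy(out, ip)
			out[3] = byte(int(out[3]) + d)
			return out
		}

		fmt.Println("Gateway:  ", step(base, 1))      // 192.168.49.1
		fmt.Println("ClientMin:", step(base, 2))      // 192.168.49.2
		fmt.Println("ClientMax:", step(broadcast, -1)) // 192.168.49.254
		fmt.Println("Broadcast:", broadcast)          // 192.168.49.255
	}
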
	I0816 22:29:01.650369  287041 network_create.go:106] attempt to create docker network enable-default-cni-20210816221527-6487 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 22:29:01.650427  287041 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20210816221527-6487
	I0816 22:29:01.730326  287041 network_create.go:90] docker network enable-default-cni-20210816221527-6487 192.168.49.0/24 created
	I0816 22:29:01.730356  287041 kic.go:106] calculated static IP "192.168.49.2" for the "enable-default-cni-20210816221527-6487" container
	I0816 22:29:01.730415  287041 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 22:29:01.772493  287041 cli_runner.go:115] Run: docker volume create enable-default-cni-20210816221527-6487 --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --label created_by.minikube.sigs.k8s.io=true
	I0816 22:29:01.813939  287041 oci.go:102] Successfully created a docker volume enable-default-cni-20210816221527-6487
	I0816 22:29:01.814044  287041 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20210816221527-6487-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --entrypoint /usr/bin/test -v enable-default-cni-20210816221527-6487:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 22:29:02.676951  287041 oci.go:106] Successfully prepared a docker volume enable-default-cni-20210816221527-6487
	W0816 22:29:02.677002  287041 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0816 22:29:02.677011  287041 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0816 22:29:02.677027  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:02.677061  287041 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 22:29:02.677072  287041 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 22:29:02.677151  287041 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210816221527-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 22:29:02.818655  287041 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20210816221527-6487 --name enable-default-cni-20210816221527-6487 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20210816221527-6487 --network enable-default-cni-20210816221527-6487 --ip 192.168.49.2 --volume enable-default-cni-20210816221527-6487:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 22:29:03.526479  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Running}}
	I0816 22:29:03.587060  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:03.639274  287041 cli_runner.go:115] Run: docker exec enable-default-cni-20210816221527-6487 stat /var/lib/dpkg/alternatives/iptables
	I0816 22:29:03.786973  287041 oci.go:278] the created container "enable-default-cni-20210816221527-6487" has a running status.
	I0816 22:29:03.787014  287041 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa...
	I0816 22:29:03.986086  287041 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 22:29:04.358534  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:04.404113  287041 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 22:29:04.404138  287041 kic_runner.go:115] Args: [docker exec --privileged enable-default-cni-20210816221527-6487 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 22:29:03.382302  280208 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:29:03.382329  280208 addons.go:344] enableAddons completed in 989.062223ms
	I0816 22:29:04.557240  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:03.757088  278507 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:29:03.757127  278507 addons.go:344] enableAddons completed in 1.014730525s
	I0816 22:29:04.985773  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:07.003267  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:06.734224  287041 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210816221527-6487:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.057025876s)
	I0816 22:29:06.734258  287041 kic.go:188] duration metric: took 4.057197 seconds to extract preloaded images to volume
	I0816 22:29:06.734321  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:06.775191  287041 machine.go:88] provisioning docker machine ...
	I0816 22:29:06.775232  287041 ubuntu.go:169] provisioning hostname "enable-default-cni-20210816221527-6487"
	I0816 22:29:06.775330  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:06.814158  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:06.814327  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:06.814345  287041 main.go:130] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20210816221527-6487 && echo "enable-default-cni-20210816221527-6487" | sudo tee /etc/hostname
	I0816 22:29:06.947664  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20210816221527-6487
	
	I0816 22:29:06.947744  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:06.991268  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:06.991415  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:06.991436  287041 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20210816221527-6487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20210816221527-6487/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20210816221527-6487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:29:07.115326  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:29:07.115360  287041 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:29:07.115387  287041 ubuntu.go:177] setting up certificates
	I0816 22:29:07.115413  287041 provision.go:83] configureAuth start
	I0816 22:29:07.115469  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:07.156704  287041 provision.go:138] copyHostCerts
	I0816 22:29:07.156762  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:29:07.156772  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:29:07.156823  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:29:07.156892  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:29:07.156903  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:29:07.156925  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:29:07.156972  287041 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:29:07.156979  287041 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:29:07.156995  287041 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0816 22:29:07.157046  287041 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20210816221527-6487 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20210816221527-6487]
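
The server cert above gets SANs for every name and address a client might use to reach the machine: the container IP, loopback, and the hostname aliases. Certificate generation with IP and DNS SANs, sketched with the standard library and self-signed for brevity (minikube signs against the ca.pem read earlier):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN entries mirroring the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "enable-default-cni-20210816221527-6487"},
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed: template doubles as issuer; a real CA would sign instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
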
	I0816 22:29:07.359663  287041 provision.go:172] copyRemoteCerts
	I0816 22:29:07.359720  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:29:07.359764  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.400845  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:07.491679  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:29:07.507602  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0816 22:29:07.522891  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:29:07.538118  287041 provision.go:86] duration metric: configureAuth took 422.691864ms
	I0816 22:29:07.538140  287041 ubuntu.go:193] setting minikube options for container-runtime
	I0816 22:29:07.538268  287041 config.go:177] Loaded profile config "enable-default-cni-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:07.538369  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.579820  287041 main.go:130] libmachine: Using SSH client type: native
	I0816 22:29:07.580005  287041 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0816 22:29:07.580027  287041 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 22:29:07.948839  287041 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 22:29:07.948876  287041 machine.go:91] provisioned docker machine in 1.173662601s
	I0816 22:29:07.948888  287041 client.go:171] LocalClient.Create took 6.426532089s
	I0816 22:29:07.948900  287041 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20210816221527-6487" took 6.426603422s
	I0816 22:29:07.948909  287041 start.go:267] post-start starting for "enable-default-cni-20210816221527-6487" (driver="docker")
	I0816 22:29:07.948919  287041 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:29:07.949005  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:29:07.949073  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:07.990570  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.079230  287041 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:29:08.081821  287041 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 22:29:08.081843  287041 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 22:29:08.081851  287041 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 22:29:08.081856  287041 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 22:29:08.081871  287041 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:29:08.081913  287041 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:29:08.081995  287041 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem -> 64872.pem in /etc/ssl/certs
	I0816 22:29:08.082085  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:29:08.088376  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:29:08.103781  287041 start.go:270] post-start completed in 154.856777ms
	I0816 22:29:08.104121  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:08.144086  287041 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/config.json ...
	I0816 22:29:08.144335  287041 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 22:29:08.144383  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.184847  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.271638  287041 start.go:129] duration metric: createHost completed in 6.751805756s
	I0816 22:29:08.271665  287041 start.go:80] releasing machines lock for "enable-default-cni-20210816221527-6487", held for 6.75192288s
	I0816 22:29:08.271749  287041 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20210816221527-6487
	I0816 22:29:08.311792  287041 ssh_runner.go:149] Run: systemctl --version
	I0816 22:29:08.311844  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.311860  287041 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:29:08.311939  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:08.357194  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.358254  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:08.476481  287041 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0816 22:29:08.494584  287041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 22:29:08.502785  287041 docker.go:153] disabling docker service ...
	I0816 22:29:08.502822  287041 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:29:08.511798  287041 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:29:08.520496  287041 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:29:08.586786  287041 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:29:08.654110  287041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:29:08.662644  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:29:08.674155  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0816 22:29:08.681389  287041 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:29:08.687169  287041 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:29:08.687220  287041 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:29:08.693699  287041 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
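
The sysctl failure above is expected on a fresh container: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which the following modprobe does; ip_forward is then enabled so pod traffic can be routed. The same probe-then-load sequence sketched in Go (the log performs it with sysctl and modprobe over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// Key absent: bridge netfilter is not active yet; load the module (needs root).
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
				return
			}
		}
		// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward` so pod traffic can be routed.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Println("enabling ip_forward needs root:", err)
		}
	}
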
	I0816 22:29:08.699557  287041 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:29:08.756179  287041 ssh_runner.go:149] Run: sudo systemctl start crio
	I0816 22:29:08.764791  287041 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 22:29:08.764838  287041 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0816 22:29:08.767745  287041 start.go:413] Will wait 60s for crictl version
	I0816 22:29:08.767786  287041 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:29:08.794263  287041 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0816 22:29:08.794324  287041 ssh_runner.go:149] Run: crio --version
	I0816 22:29:08.850590  287041 ssh_runner.go:149] Run: crio --version
	I0816 22:29:08.912869  287041 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0816 22:29:08.912942  287041 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210816221527-6487 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 22:29:08.955570  287041 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 22:29:08.959326  287041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
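
The bash one-liner above keeps the record unique: any existing host.minikube.internal line is filtered out before the fresh entry is appended, and the rebuilt file replaces /etc/hosts in a single cp. Equivalent logic sketched in Go (illustrative only):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost drops any stale "<ip>\t<name>" line and appends the current
	// one, mirroring the grep -v / echo / cp pipeline in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
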
	I0816 22:29:08.968419  287041 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 22:29:08.968480  287041 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:29:09.011377  287041 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:29:09.011399  287041 crio.go:333] Images already preloaded, skipping extraction
	I0816 22:29:09.011446  287041 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:29:09.034810  287041 crio.go:424] all images are preloaded for cri-o runtime.
	I0816 22:29:09.034829  287041 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:29:09.034895  287041 ssh_runner.go:149] Run: crio config
	I0816 22:29:09.101341  287041 cni.go:93] Creating CNI manager for "bridge"
	I0816 22:29:09.101363  287041 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:29:09.101374  287041 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-20210816221527-6487 NodeName:enable-default-cni-20210816221527-6487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:29:09.101486  287041 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "enable-default-cni-20210816221527-6487"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:29:09.101594  287041 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=enable-default-cni-20210816221527-6487 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
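
One detail of the generated config worth noting: the KubeletConfiguration above sets cgroupDriver: systemd because CRI-O on this image runs under the systemd cgroup manager, and the kubelet and runtime must agree or pods fail to start. A tiny sanity check of that field, assuming the third-party gopkg.in/yaml.v3 module (not how minikube validates it):

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	const kubeletCfg = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	failSwapOn: false
	`

	func main() {
		var cfg struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
			panic(err)
		}
		if cfg.Kind == "KubeletConfiguration" && cfg.CgroupDriver != "systemd" {
			fmt.Println("cgroupDriver mismatch: kubelet and CRI-O would fight over cgroups")
			return
		}
		fmt.Println("cgroupDriver:", cfg.CgroupDriver)
	}
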
	I0816 22:29:09.101672  287041 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:29:09.108520  287041 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:29:09.108584  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:29:09.114954  287041 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (569 bytes)
	I0816 22:29:09.126940  287041 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
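
The kubelet unit text logged at 22:29:09.101594 is what lands in the two files scp'd above (the 10-kubeadm.conf drop-in plus kubelet.service). To see the merged unit systemd actually runs, a sketch under the same profile assumption as above:

    minikube -p enable-default-cni-20210816221527-6487 ssh -- systemctl cat kubelet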
	I0816 22:29:09.138998  287041 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0816 22:29:09.150578  287041 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 22:29:09.153307  287041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
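
The /etc/hosts rewrite above follows a filter-append-replace idiom: strip any stale control-plane.minikube.internal line, append the fresh mapping, and only then copy the temp file over /etc/hosts, so a failed pipeline cannot leave the file truncated. The same pattern in isolation (IP and hostname taken from the log line above):

    # rebuild /etc/hosts with exactly one control-plane entry
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.49.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts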
	I0816 22:29:09.161690  287041 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487 for IP: 192.168.49.2
	I0816 22:29:09.161741  287041 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:29:09.161762  287041 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:29:09.161818  287041 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key
	I0816 22:29:09.161833  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt with IP's: []
	I0816 22:29:09.449885  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt ...
	I0816 22:29:09.449919  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: {Name:mk787487aff5e89283c8237ab26c20ab89fb98cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.450100  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key ...
	I0816 22:29:09.450115  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.key: {Name:mk0024176c9cc5bca546e7fd653ef097eec1e9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.450202  287041 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2
	I0816 22:29:09.450211  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 22:29:09.609347  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 ...
	I0816 22:29:09.609376  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2: {Name:mka21856498bddf5b84651feac7da58004ff5027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.609543  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2 ...
	I0816 22:29:09.609559  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2: {Name:mk6533a98a19c316486022d7c38e7254d06a9017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.609641  287041 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt
	I0816 22:29:09.609696  287041 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key
	I0816 22:29:09.609744  287041 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key
	I0816 22:29:09.609753  287041 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt with IP's: []
	I0816 22:29:09.914264  287041 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt ...
	I0816 22:29:09.914298  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt: {Name:mk48e6ae98d60f94f52b7619bfb5c2c07c65c4ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.914475  287041 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key ...
	I0816 22:29:09.914487  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key: {Name:mked0bd6193e1ab4657b0eb9a2fbeb1be39a51e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:09.914646  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem (1338 bytes)
	W0816 22:29:09.914680  287041 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487_empty.pem, impossibly tiny 0 bytes
	I0816 22:29:09.914690  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 22:29:09.914715  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:29:09.914739  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:29:09.914764  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0816 22:29:09.914814  287041 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem (1708 bytes)
	I0816 22:29:09.915685  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:29:09.934308  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:29:10.028545  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:29:10.044046  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:29:10.060095  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:29:10.076045  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 22:29:10.092462  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:29:10.109485  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 22:29:10.126092  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/64872.pem --> /usr/share/ca-certificates/64872.pem (1708 bytes)
	I0816 22:29:10.142946  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:29:10.161311  287041 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6487.pem --> /usr/share/ca-certificates/6487.pem (1338 bytes)
	I0816 22:29:10.177140  287041 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:29:10.189371  287041 ssh_runner.go:149] Run: openssl version
	I0816 22:29:10.194096  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:29:10.201394  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.204217  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.204268  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:29:10.208773  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:29:10.215971  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6487.pem && ln -fs /usr/share/ca-certificates/6487.pem /etc/ssl/certs/6487.pem"
	I0816 22:29:10.223388  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.226167  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:50 /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.226211  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6487.pem
	I0816 22:29:10.230987  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6487.pem /etc/ssl/certs/51391683.0"
	I0816 22:29:10.238153  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64872.pem && ln -fs /usr/share/ca-certificates/64872.pem /etc/ssl/certs/64872.pem"
	I0816 22:29:10.245703  287041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.248452  287041 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:50 /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.248490  287041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64872.pem
	I0816 22:29:10.252867  287041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64872.pem /etc/ssl/certs/3ec20f2e.0"
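
The link names b5213941.0, 51391683.0, and 3ec20f2e.0 above are OpenSSL subject-hash lookups: tools resolve CAs in /etc/ssl/certs by the certificate's subject hash plus a .0 suffix, which is exactly what the openssl x509 -hash runs compute. Reproducing one link by hand (path from the log above):

    # derive the subject hash and create the symlink openssl expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"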
	I0816 22:29:10.259517  287041 kubeadm.go:390] StartCluster: {Name:enable-default-cni-20210816221527-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:enable-default-cni-20210816221527-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:29:10.259598  287041 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 22:29:10.259638  287041 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:29:10.282086  287041 cri.go:76] found id: ""
	I0816 22:29:10.282139  287041 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:29:10.288490  287041 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:29:10.294755  287041 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 22:29:10.294799  287041 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:29:10.300722  287041 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:29:10.300766  287041 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 22:29:10.609613  287041 out.go:204]   - Generating certificates and keys ...
	I0816 22:29:07.058281  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:09.556570  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:09.486435  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:11.985342  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:12.928708  287041 out.go:204]   - Booting up control plane ...
	I0816 22:29:12.056465  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:14.555232  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:14.485536  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:16.985140  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:16.555781  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:19.057542  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:18.985770  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:20.985840  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:21.557784  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:24.056159  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:26.059388  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:23.486375  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:25.486634  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:27.484023  287041 out.go:204]   - Configuring RBAC rules ...
	I0816 22:29:27.900740  287041 cni.go:93] Creating CNI manager for "bridge"
	I0816 22:29:27.902410  287041 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:29:27.902480  287041 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:29:27.909351  287041 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
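
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist above is minikube's rendered bridge CNI config. Its contents are not echoed in this log, but they can be read off the node directly (same profile assumption as earlier):

    minikube -p enable-default-cni-20210816221527-6487 ssh -- \
      sudo cat /etc/cni/net.d/1-k8s.conflist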
	I0816 22:29:27.921290  287041 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:29:27.921335  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=enable-default-cni-20210816221527-6487 minikube.k8s.io/updated_at=2021_08_16T22_29_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:27.921337  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
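
The two kubectl runs above stamp the node with minikube.k8s.io/* labels and bind kube-system:default to cluster-admin via the minikube-rbac clusterrolebinding. Both are easy to verify once kubectl points at this profile's context:

    kubectl get nodes --show-labels
    kubectl get clusterrolebinding minikube-rbac -o wide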
	I0816 22:29:27.943328  287041 ops.go:34] apiserver oom_adj: -16
	I0816 22:29:28.050122  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:28.937110  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:29.436804  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:29.937365  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:30.437442  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:30.937109  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:28.061110  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:30.555605  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:27.487480  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:29.986302  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:31.437092  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:31.936763  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:32.437444  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:32.937614  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:33.437095  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:33.937405  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:34.436605  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:34.936969  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:33.064758  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:32.485766  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:34.486036  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:39.211417  287041 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.274409949s)
	I0816 22:29:39.436619  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:39.936656  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:40.437355  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:40.936966  287041 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:29:41.009634  287041 kubeadm.go:985] duration metric: took 13.088344497s to wait for elevateKubeSystemPrivileges.
	I0816 22:29:41.009664  287041 kubeadm.go:392] StartCluster complete in 30.75015318s
	I0816 22:29:41.009684  287041 settings.go:142] acquiring lock: {Name:mk71c9e00e0a208f5191d6b85d29a074b46503a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:41.009778  287041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:29:41.012165  287041 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk28ce1df8739dfb9de9d45de196f2af338317cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:29:39.533480  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:41.527265  287041 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "enable-default-cni-20210816221527-6487" rescaled to 1
	I0816 22:29:41.527320  287041 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:29:41.529298  287041 out.go:177] * Verifying Kubernetes components...
	I0816 22:29:41.529347  287041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:41.527366  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:29:41.527386  287041 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:29:41.527528  287041 config.go:177] Loaded profile config "enable-default-cni-20210816221527-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:29:41.529504  287041 addons.go:59] Setting storage-provisioner=true in profile "enable-default-cni-20210816221527-6487"
	I0816 22:29:41.529517  287041 addons.go:59] Setting default-storageclass=true in profile "enable-default-cni-20210816221527-6487"
	I0816 22:29:41.529547  287041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-20210816221527-6487"
	I0816 22:29:41.529522  287041 addons.go:135] Setting addon storage-provisioner=true in "enable-default-cni-20210816221527-6487"
	W0816 22:29:41.529614  287041 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:29:41.529656  287041 host.go:66] Checking if "enable-default-cni-20210816221527-6487" exists ...
	I0816 22:29:41.529873  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:41.530107  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:37.213850  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:40.986599  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:41.587459  287041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:29:41.587584  287041 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:41.587597  287041 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:29:41.587650  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:41.589105  287041 addons.go:135] Setting addon default-storageclass=true in "enable-default-cni-20210816221527-6487"
	W0816 22:29:41.589126  287041 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:29:41.589167  287041 host.go:66] Checking if "enable-default-cni-20210816221527-6487" exists ...
	I0816 22:29:41.589691  287041 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210816221527-6487 --format={{.State.Status}}
	I0816 22:29:41.635384  287041 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:29:41.638754  287041 node_ready.go:35] waiting up to 5m0s for node "enable-default-cni-20210816221527-6487" to be "Ready" ...
	I0816 22:29:41.639705  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:41.645879  287041 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:41.645898  287041 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:29:41.645954  287041 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210816221527-6487
	I0816 22:29:41.647414  287041 node_ready.go:49] node "enable-default-cni-20210816221527-6487" has status "Ready":"True"
	I0816 22:29:41.647427  287041 node_ready.go:38] duration metric: took 8.644898ms waiting for node "enable-default-cni-20210816221527-6487" to be "Ready" ...
	I0816 22:29:41.647436  287041 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:41.657566  287041 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-242qn" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:41.690118  287041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/enable-default-cni-20210816221527-6487/id_rsa Username:docker}
	I0816 22:29:41.830926  287041 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:29:42.028831  287041 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:29:42.226253  287041 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
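
The "host record injected" line confirms the sed pipeline run at 22:29:41.635384 rewrote the CoreDNS Corefile with a hosts block mapping host.minikube.internal to 192.168.49.1. A quick check, assuming kubectl targets this cluster:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'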
	I0816 22:29:42.547478  287041 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:29:42.547508  287041 addons.go:344] enableAddons completed in 1.02013595s
	I0816 22:29:43.673336  287041 pod_ready.go:102] pod "coredns-558bd4d5db-242qn" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:44.672012  287041 pod_ready.go:92] pod "coredns-558bd4d5db-242qn" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:44.672036  287041 pod_ready.go:81] duration metric: took 3.01444858s waiting for pod "coredns-558bd4d5db-242qn" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:44.672046  287041 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-9qpwq" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:41.560557  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:44.057539  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:42.986862  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:45.486259  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:47.985998  278507 pod_ready.go:102] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:49.485956  278507 pod_ready.go:92] pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:49.485980  278507 pod_ready.go:81] duration metric: took 46.545406375s waiting for pod "coredns-558bd4d5db-lqdfg" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.485993  278507 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-rgqln" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.487963  278507 pod_ready.go:97] error getting pod "coredns-558bd4d5db-rgqln" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgqln" not found
	I0816 22:29:49.487984  278507 pod_ready.go:81] duration metric: took 1.983119ms waiting for pod "coredns-558bd4d5db-rgqln" in "kube-system" namespace to be "Ready" ...
	E0816 22:29:49.487995  278507 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-rgqln" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgqln" not found
	I0816 22:29:49.488004  278507 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.491762  278507 pod_ready.go:92] pod "etcd-auto-20210816221527-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:49.491778  278507 pod_ready.go:81] duration metric: took 3.767279ms waiting for pod "etcd-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.491789  278507 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.495350  278507 pod_ready.go:92] pod "kube-apiserver-auto-20210816221527-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:49.495365  278507 pod_ready.go:81] duration metric: took 3.570189ms waiting for pod "kube-apiserver-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.495376  278507 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.498959  278507 pod_ready.go:92] pod "kube-controller-manager-auto-20210816221527-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:49.498975  278507 pod_ready.go:81] duration metric: took 3.591015ms waiting for pod "kube-controller-manager-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.498985  278507 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-g5xtg" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.683554  278507 pod_ready.go:92] pod "kube-proxy-g5xtg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:49.683571  278507 pod_ready.go:81] duration metric: took 184.579395ms waiting for pod "kube-proxy-g5xtg" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:49.683581  278507 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:50.083358  278507 pod_ready.go:92] pod "kube-scheduler-auto-20210816221527-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:50.083378  278507 pod_ready.go:81] duration metric: took 399.790361ms waiting for pod "kube-scheduler-auto-20210816221527-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:50.083385  278507 pod_ready.go:38] duration metric: took 47.159603497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:50.083401  278507 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:29:50.083438  278507 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:29:50.104045  278507 api_server.go:70] duration metric: took 47.361795354s to wait for apiserver process to appear ...
	I0816 22:29:50.104067  278507 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:29:50.104075  278507 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 22:29:50.108450  278507 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 22:29:50.109235  278507 api_server.go:139] control plane version: v1.21.3
	I0816 22:29:50.109252  278507 api_server.go:129] duration metric: took 5.180929ms to wait for apiserver health ...
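
The healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" ends it. The same probe by hand (endpoint from the log; -k because the serving cert is signed by the cluster-local CA, and anonymous access to /healthz is normally permitted by the system:public-info-viewer role):

    curl -k https://192.168.67.2:8443/healthz
    # prints: ok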
	I0816 22:29:50.109260  278507 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:29:50.286231  278507 system_pods.go:59] 8 kube-system pods found
	I0816 22:29:50.286265  278507 system_pods.go:61] "coredns-558bd4d5db-lqdfg" [a519b1db-e452-465c-a48a-f83adf8dfb4e] Running
	I0816 22:29:50.286270  278507 system_pods.go:61] "etcd-auto-20210816221527-6487" [9117541b-bdff-41b9-8109-895d71c38a6a] Running
	I0816 22:29:50.286275  278507 system_pods.go:61] "kindnet-rt9ss" [7bd068cb-0560-4489-a2e3-67ba239b58ee] Running
	I0816 22:29:50.286278  278507 system_pods.go:61] "kube-apiserver-auto-20210816221527-6487" [5d461789-790d-4517-b616-d37bfed95bf5] Running
	I0816 22:29:50.286282  278507 system_pods.go:61] "kube-controller-manager-auto-20210816221527-6487" [544c1379-bb6d-4b29-943f-9eca65f3561a] Running
	I0816 22:29:50.286286  278507 system_pods.go:61] "kube-proxy-g5xtg" [a148aeb7-daea-4117-a3f2-47814c88ede8] Running
	I0816 22:29:50.286292  278507 system_pods.go:61] "kube-scheduler-auto-20210816221527-6487" [d922e6e2-c956-4dbe-b1af-b55c75a7639a] Running
	I0816 22:29:50.286297  278507 system_pods.go:61] "storage-provisioner" [4496f9dc-8418-43dc-b4f3-19420b7ecda4] Running
	I0816 22:29:50.286304  278507 system_pods.go:74] duration metric: took 177.039177ms to wait for pod list to return data ...
	I0816 22:29:50.286313  278507 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:29:50.484456  278507 default_sa.go:45] found service account: "default"
	I0816 22:29:50.484480  278507 default_sa.go:55] duration metric: took 198.159002ms for default service account to be created ...
	I0816 22:29:50.484489  278507 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:29:50.685709  278507 system_pods.go:86] 8 kube-system pods found
	I0816 22:29:50.685733  278507 system_pods.go:89] "coredns-558bd4d5db-lqdfg" [a519b1db-e452-465c-a48a-f83adf8dfb4e] Running
	I0816 22:29:50.685738  278507 system_pods.go:89] "etcd-auto-20210816221527-6487" [9117541b-bdff-41b9-8109-895d71c38a6a] Running
	I0816 22:29:50.685742  278507 system_pods.go:89] "kindnet-rt9ss" [7bd068cb-0560-4489-a2e3-67ba239b58ee] Running
	I0816 22:29:50.685747  278507 system_pods.go:89] "kube-apiserver-auto-20210816221527-6487" [5d461789-790d-4517-b616-d37bfed95bf5] Running
	I0816 22:29:50.685751  278507 system_pods.go:89] "kube-controller-manager-auto-20210816221527-6487" [544c1379-bb6d-4b29-943f-9eca65f3561a] Running
	I0816 22:29:50.685754  278507 system_pods.go:89] "kube-proxy-g5xtg" [a148aeb7-daea-4117-a3f2-47814c88ede8] Running
	I0816 22:29:50.685758  278507 system_pods.go:89] "kube-scheduler-auto-20210816221527-6487" [d922e6e2-c956-4dbe-b1af-b55c75a7639a] Running
	I0816 22:29:50.685762  278507 system_pods.go:89] "storage-provisioner" [4496f9dc-8418-43dc-b4f3-19420b7ecda4] Running
	I0816 22:29:50.685767  278507 system_pods.go:126] duration metric: took 201.274285ms to wait for k8s-apps to be running ...
	I0816 22:29:50.685778  278507 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:29:50.685815  278507 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:50.694975  278507 system_svc.go:56] duration metric: took 9.190037ms WaitForService to wait for kubelet.
	I0816 22:29:50.694994  278507 kubeadm.go:547] duration metric: took 47.952749622s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:29:50.695020  278507 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:29:50.885084  278507 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:29:50.885119  278507 node_conditions.go:123] node cpu capacity is 8
	I0816 22:29:50.885136  278507 node_conditions.go:105] duration metric: took 190.106459ms to run NodePressure ...
	I0816 22:29:50.885163  278507 start.go:231] waiting for startup goroutines ...
	I0816 22:29:50.928492  278507 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:29:50.931243  278507 out.go:177] * Done! kubectl is now configured to use "auto-20210816221527-6487" cluster and "default" namespace by default
	I0816 22:29:46.681994  287041 pod_ready.go:102] pod "coredns-558bd4d5db-9qpwq" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:49.181514  287041 pod_ready.go:102] pod "coredns-558bd4d5db-9qpwq" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:46.556165  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:48.556302  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:50.556463  280208 pod_ready.go:102] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"False"
	I0816 22:29:52.556242  280208 pod_ready.go:92] pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.556266  280208 pod_ready.go:81] duration metric: took 50.019060236s waiting for pod "coredns-558bd4d5db-gkxqs" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.556278  280208 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.560059  280208 pod_ready.go:92] pod "etcd-kindnet-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.560081  280208 pod_ready.go:81] duration metric: took 3.79508ms waiting for pod "etcd-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.560092  280208 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.563815  280208 pod_ready.go:92] pod "kube-apiserver-kindnet-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.563830  280208 pod_ready.go:81] duration metric: took 3.73183ms waiting for pod "kube-apiserver-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.563839  280208 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.567209  280208 pod_ready.go:92] pod "kube-controller-manager-kindnet-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.567230  280208 pod_ready.go:81] duration metric: took 3.384454ms waiting for pod "kube-controller-manager-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.567243  280208 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-45jk5" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.570888  280208 pod_ready.go:92] pod "kube-proxy-45jk5" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.570903  280208 pod_ready.go:81] duration metric: took 3.653573ms waiting for pod "kube-proxy-45jk5" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.570911  280208 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.955163  280208 pod_ready.go:92] pod "kube-scheduler-kindnet-20210816221528-6487" in "kube-system" namespace has status "Ready":"True"
	I0816 22:29:52.955182  280208 pod_ready.go:81] duration metric: took 384.263943ms waiting for pod "kube-scheduler-kindnet-20210816221528-6487" in "kube-system" namespace to be "Ready" ...
	I0816 22:29:52.955191  280208 pod_ready.go:38] duration metric: took 50.430052174s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:29:52.955207  280208 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:29:52.955251  280208 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:29:52.978824  280208 api_server.go:70] duration metric: took 50.585592763s to wait for apiserver process to appear ...
	I0816 22:29:52.978847  280208 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:29:52.978857  280208 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0816 22:29:52.983216  280208 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0816 22:29:52.984059  280208 api_server.go:139] control plane version: v1.21.3
	I0816 22:29:52.984079  280208 api_server.go:129] duration metric: took 5.226526ms to wait for apiserver health ...
	I0816 22:29:52.984088  280208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:29:53.161860  280208 system_pods.go:59] 8 kube-system pods found
	I0816 22:29:53.161889  280208 system_pods.go:61] "coredns-558bd4d5db-gkxqs" [9a11bb49-4aa1-4e7d-b0b4-1f5b0effd6dd] Running
	I0816 22:29:53.161897  280208 system_pods.go:61] "etcd-kindnet-20210816221528-6487" [e3bc3541-b8ad-4108-b3f2-e6c837346ed0] Running
	I0816 22:29:53.161903  280208 system_pods.go:61] "kindnet-c7m6z" [4f09a859-acb9-4b08-82db-31c22870e25d] Running
	I0816 22:29:53.161933  280208 system_pods.go:61] "kube-apiserver-kindnet-20210816221528-6487" [509aa17c-eb63-4c2b-8f00-4a4ad4b83d6b] Running
	I0816 22:29:53.161940  280208 system_pods.go:61] "kube-controller-manager-kindnet-20210816221528-6487" [1b7599f9-87fb-4240-92eb-6d219307b681] Running
	I0816 22:29:53.161953  280208 system_pods.go:61] "kube-proxy-45jk5" [4bbcfe24-1dcd-462f-b835-58071bfaf215] Running
	I0816 22:29:53.161965  280208 system_pods.go:61] "kube-scheduler-kindnet-20210816221528-6487" [c1b1d294-b565-416d-bc8b-024fdfc9af52] Running
	I0816 22:29:53.161975  280208 system_pods.go:61] "storage-provisioner" [fa626354-578c-488b-9401-410e02a740ca] Running
	I0816 22:29:53.161984  280208 system_pods.go:74] duration metric: took 177.890728ms to wait for pod list to return data ...
	I0816 22:29:53.161997  280208 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:29:53.355748  280208 default_sa.go:45] found service account: "default"
	I0816 22:29:53.355773  280208 default_sa.go:55] duration metric: took 193.771249ms for default service account to be created ...
	I0816 22:29:53.355783  280208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:29:53.566111  280208 system_pods.go:86] 8 kube-system pods found
	I0816 22:29:53.566159  280208 system_pods.go:89] "coredns-558bd4d5db-gkxqs" [9a11bb49-4aa1-4e7d-b0b4-1f5b0effd6dd] Running
	I0816 22:29:53.566168  280208 system_pods.go:89] "etcd-kindnet-20210816221528-6487" [e3bc3541-b8ad-4108-b3f2-e6c837346ed0] Running
	I0816 22:29:53.566179  280208 system_pods.go:89] "kindnet-c7m6z" [4f09a859-acb9-4b08-82db-31c22870e25d] Running
	I0816 22:29:53.566185  280208 system_pods.go:89] "kube-apiserver-kindnet-20210816221528-6487" [509aa17c-eb63-4c2b-8f00-4a4ad4b83d6b] Running
	I0816 22:29:53.566195  280208 system_pods.go:89] "kube-controller-manager-kindnet-20210816221528-6487" [1b7599f9-87fb-4240-92eb-6d219307b681] Running
	I0816 22:29:53.566201  280208 system_pods.go:89] "kube-proxy-45jk5" [4bbcfe24-1dcd-462f-b835-58071bfaf215] Running
	I0816 22:29:53.566211  280208 system_pods.go:89] "kube-scheduler-kindnet-20210816221528-6487" [c1b1d294-b565-416d-bc8b-024fdfc9af52] Running
	I0816 22:29:53.566219  280208 system_pods.go:89] "storage-provisioner" [fa626354-578c-488b-9401-410e02a740ca] Running
	I0816 22:29:53.566231  280208 system_pods.go:126] duration metric: took 210.442045ms to wait for k8s-apps to be running ...
	I0816 22:29:53.566241  280208 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:29:53.566286  280208 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:29:53.577304  280208 system_svc.go:56] duration metric: took 11.056853ms WaitForService to wait for kubelet.
	I0816 22:29:53.577328  280208 kubeadm.go:547] duration metric: took 51.184099248s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:29:53.577351  280208 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:29:53.755169  280208 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0816 22:29:53.755215  280208 node_conditions.go:123] node cpu capacity is 8
	I0816 22:29:53.755228  280208 node_conditions.go:105] duration metric: took 177.870994ms to run NodePressure ...
	I0816 22:29:53.755240  280208 start.go:231] waiting for startup goroutines ...
	I0816 22:29:53.805420  280208 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:29:53.807832  280208 out.go:177] * Done! kubectl is now configured to use "kindnet-20210816221528-6487" cluster and "default" namespace by default
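
Each profile's start finishes by switching the kubeconfig's current-context to the new cluster, so after these parallel runs the active context should be whichever profile completed last. From the host:

    kubectl config current-context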
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Mon 2021-08-16 22:17:44 UTC, end at Mon 2021-08-16 22:29:54 UTC. --
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.544160061Z" level=info msg="Created container 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=df17ca10-e14f-4270-9be8-3975eceb9917 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.544686200Z" level=info msg="Starting container: 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0" id=d1f19a0d-33a3-4c6a-b2ee-15791116f8c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.567463047Z" level=info msg="Started container 29cb6abc91eccce9c1ed3060df0bcf1712166b848ba6112f56dfa0a6c8b150a0: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=d1f19a0d-33a3-4c6a-b2ee-15791116f8c5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.883456974Z" level=info msg="Removing container: e231f4ecb120e2248b78aaf7c05cd9130cf1461fb487af9bacb3bc406eebdc4d" id=20d711e9-7a38-4a06-9f76-2db82f05e3eb name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:12.916469589Z" level=info msg="Removed container e231f4ecb120e2248b78aaf7c05cd9130cf1461fb487af9bacb3bc406eebdc4d: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-p56jc/dashboard-metrics-scraper" id=20d711e9-7a38-4a06-9f76-2db82f05e3eb name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:21.379892700Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=d12524f4-c44d-453c-9dfa-0a78bd09861a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:21.380232669Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=d12524f4-c44d-453c-9dfa-0a78bd09861a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:36.379503105Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=e92d982f-6662-48d8-84e5-dc2b89863576 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:36.379800289Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=e92d982f-6662-48d8-84e5-dc2b89863576 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:51.379848039Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=df8199cc-8485-4311-8fc7-c3ba81d3bfe8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:27:51.380134943Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=df8199cc-8485-4311-8fc7-c3ba81d3bfe8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:01 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:01.384822714Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.1" id=89881e64-9cb1-4d6e-812a-db638305cb11 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:01 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:01.385540332Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e,RepoTags:[k8s.gcr.io/pause:3.1],RepoDigests:[k8s.gcr.io/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea],Size_:748776,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=89881e64-9cb1-4d6e-812a-db638305cb11 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:05.379606364Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=8e62c339-85b2-4dbe-8efd-c34a5a50036a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:05.379880035Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=8e62c339-85b2-4dbe-8efd-c34a5a50036a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:17.379536429Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=f3cbe963-184a-4e9c-8ae6-e46198c7eb81 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:17.379832866Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=f3cbe963-184a-4e9c-8ae6-e46198c7eb81 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:28.379581420Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=65090874-5c43-411d-ba16-d232e258ecdb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:28.379792470Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=65090874-5c43-411d-ba16-d232e258ecdb name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:40.379761150Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=caa2ed8f-6cac-491b-9f12-1a95facf31f3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:40.380055032Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=caa2ed8f-6cac-491b-9f12-1a95facf31f3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:52.379504736Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=7eb3ace6-d9f8-4da5-b9ab-4168d304ce45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:28:52.379710341Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=7eb3ace6-d9f8-4da5-b9ab-4168d304ce45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:29:06.379679984Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=47a43c89-6b29-42de-96bd-f2810315ec63 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 crio[243]: time="2021-08-16 22:29:06.379970936Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=47a43c89-6b29-42de-96bd-f2810315ec63 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	29cb6abc91ecc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   5                   a76d338eee6de
	10ed0c559670b       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   5 minutes ago       Running             coredns                     0                   a75c25d2d9fd6
	9824eba2c3288       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Exited              storage-provisioner         0                   2aeef213ad07d
	fc1a6c3255410       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   5 minutes ago       Running             kubernetes-dashboard        0                   fb7f0589487bb
	ee5a79b4037bd       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   6 minutes ago       Running             kindnet-cni                 0                   268365ea989a7
	573ba7ae7e940       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   6 minutes ago       Running             kube-proxy                  0                   3ee8752e7a891
	68efe63d2b18a       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   6 minutes ago       Running             etcd                        0                   545c9d2ab1fb0
	39eab1fff2a03       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   6 minutes ago       Running             kube-controller-manager     0                   2e33cbc445de2
	5d9a6699a0827       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   6 minutes ago       Running             kube-scheduler              0                   99077c4379571
	1646719043afc       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   6 minutes ago       Running             kube-apiserver              0                   3e22a704d7145
	
	* 
	* ==> coredns [10ed0c559670bc837ba359f0311f63a6421f80088a63de7509a9ec51ec991904] <==
	* .:53
	2021-08-16T22:24:12.962Z [INFO] CoreDNS-1.3.1
	2021-08-16T22:24:12.962Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-16T22:24:12.962Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000004] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff fa eb a1 ec 39 af 08 06        ..........9...
	[  +0.004280] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be 23 46 8c 4d d6 08 06        .......#F.M...
	[  +0.403009] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa eb a1 ec 39 af 08 06        ..........9...
	[  +0.026417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff be 23 46 8c 4d d6 08 06        .......#F.M...
	[  +5.266307] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth304ddcac
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 12 2f b9 81 b0 0d 08 06        ......./......
	[  +2.687875] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb4a2a423
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b2 2b 41 97 91 1e 08 06        .......+A.....
	[  +0.983719] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth194e2de4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 66 49 8f a1 9e ad 08 06        ......fI......
	[  +3.416215] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 6a 03 48 83 b1 08 06        .......j.H....
	[  +4.303351] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth3796ef2c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 24 7d 3f 7d 89 08 06        .......$}?}...
	[Aug16 22:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 6a 03 48 83 b1 08 06        .......j.H....
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be 23 46 8c 4d d6 08 06        .......#F.M...
	[  +2.954512] cgroup: cgroup2: unknown option "nsdelegate"
	[  +3.504862] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.980715] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [68efe63d2b18a4657b5d62078100ef1b193a339d0b486472b1c85f1d4189e4ff] <==
	* 2021-08-16 22:25:16.692067 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (1.062929436s) to execute
	2021-08-16 22:25:16.692145 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-pb7tf\" " with result "range_response_count:1 size:1956" took too long (1.311614746s) to execute
	2021-08-16 22:25:16.692185 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-pb7tf.169be9b4f48c19b7\" " with result "range_response_count:1 size:511" took too long (1.309904831s) to execute
	2021-08-16 22:25:17.338805 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210816221528-6487\" " with result "range_response_count:1 size:5050" took too long (643.942509ms) to execute
	2021-08-16 22:25:17.348517 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (653.599843ms) to execute
	2021-08-16 22:28:52.039410 W | etcdserver: request "header:<ID:3238505195140492486 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:796 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238505195140492484 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>" with result "size:16" took too long (1.687475809s) to execute
	2021-08-16 22:28:52.039530 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5050" took too long (2.374963136s) to execute
	2021-08-16 22:28:52.188160 W | wal: sync duration of 1.836345842s, expected less than 1s
	2021-08-16 22:28:52.309878 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (2.198421725s) to execute
	2021-08-16 22:28:52.309919 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (2.360089157s) to execute
	2021-08-16 22:28:52.310055 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.288274379s) to execute
	2021-08-16 22:28:52.310147 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (269.126412ms) to execute
	2021-08-16 22:28:52.310926 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (475.426002ms) to execute
	2021-08-16 22:28:57.446583 W | wal: sync duration of 2.412274702s, expected less than 1s
	2021-08-16 22:28:57.937853 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (1.615747157s) to execute
	2021-08-16 22:28:57.937890 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.361401369s) to execute
	2021-08-16 22:28:57.937944 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:9 size:20520" took too long (1.742253642s) to execute
	2021-08-16 22:28:57.938083 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (2.512479783s) to execute
	2021-08-16 22:28:57.938092 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.800163452s) to execute
	2021-08-16 22:28:57.938201 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "range_response_count:0 size:5" took too long (1.559935972s) to execute
	2021-08-16 22:28:57.938286 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (495.308538ms) to execute
	2021-08-16 22:28:57.938299 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210816221528-6487\" " with result "range_response_count:1 size:396" took too long (498.586634ms) to execute
	2021-08-16 22:28:57.938419 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (496.635016ms) to execute
	2021-08-16 22:28:57.938440 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (195.783788ms) to execute
	2021-08-16 22:28:59.163254 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (683.488234ms) to execute
	
	* 
	* ==> kernel <==
	*  22:30:34 up  1:10,  0 users,  load average: 3.76, 2.90, 2.46
	Linux old-k8s-version-20210816221528-6487 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [1646719043afc023ec9a9c6e546a9e5e1fa4a04854ab10ce7530b9bbe1c06030] <==
	* I0816 22:29:02.091739       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:02.091877       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:03.092046       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:03.092166       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:04.093196       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:04.093319       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:05.093489       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:05.093601       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:06.093800       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:06.093933       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:07.094099       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:07.094202       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:07.232773       1 controller.go:102] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0816 22:29:07.232851       1 handler_proxy.go:89] no RequestInfo found in the context
	E0816 22:29:07.232931       1 controller.go:108] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:29:07.232946       1 controller.go:121] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:29:08.094359       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:08.094471       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:09.094647       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:09.094756       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:10.094915       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:10.095030       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:29:11.095236       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:29:11.095365       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	
	* 
	* ==> kube-controller-manager [39eab1fff2a03f36068c707a1a5ae682543a0f87a9d27daeb773edb072c84571] <==
	* I0816 22:23:29.614680       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"9527ed20-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-p56jc
	I0816 22:23:30.035943       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"94e8fa5e-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-pb7tf
	I0816 22:23:30.534019       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"9531ed2c-fee0-11eb-938e-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-kvm5k
	E0816 22:23:57.137720       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:23:59.697881       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0816 22:24:07.712109       1 node_lifecycle_controller.go:1036] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:24:27.389213       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:24:31.699611       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:24:57.640749       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:25:03.700952       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:25:27.892003       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:25:35.702252       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:25:58.143378       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:26:07.703679       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:26:28.394912       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:26:39.704940       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:26:58.646055       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:11.706228       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:27:28.897456       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:27:43.707468       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:27:59.148732       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:15.709156       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:29.400057       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:28:47.711036       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:28:59.651739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [573ba7ae7e9400419eedaf1a8a703ea83fd88f346bc0926601b8ced182e07bed] <==
	* W0816 22:23:29.142809       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0816 22:23:29.225256       1 server_others.go:148] Using iptables Proxier.
	I0816 22:23:29.228234       1 server_others.go:178] Tearing down inactive rules.
	E0816 22:23:30.330850       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0816 22:23:30.724242       1 server.go:555] Version: v1.14.0
	I0816 22:23:30.728326       1 config.go:202] Starting service config controller
	I0816 22:23:30.728352       1 config.go:102] Starting endpoints config controller
	I0816 22:23:30.728372       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0816 22:23:30.728352       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0816 22:23:30.828526       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0816 22:23:30.828674       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [5d9a6699a082709279178dd0fcfe86839cc48019194dd5952cc13c71fe9474db] <==
	* W0816 22:23:04.336333       1 authentication.go:55] Authentication is disabled
	I0816 22:23:04.336348       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0816 22:23:04.336686       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0816 22:23:06.199512       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:06.223725       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:06.223881       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:06.232049       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:06.232284       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:06.232338       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:06.232518       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:06.233704       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:06.233755       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:06.238769       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:07.200609       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:07.224766       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:07.227652       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:07.232982       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:07.234073       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:07.235099       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:07.236257       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:07.237225       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:07.238359       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:07.239562       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0816 22:23:09.037938       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0816 22:23:09.138108       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:17:44 UTC, end at Mon 2021-08-16 22:30:34 UTC. --
	Aug 16 22:27:09 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:09.394942    5012 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 16 22:27:09 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:09.394978    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 16 22:27:12 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:12.883437    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:21 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:21.380456    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:22 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:22.420359    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:32 old-k8s-version-20210816221528-6487 kubelet[5012]: W0816 22:27:32.050898    5012 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 16 22:27:36 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:36.380124    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:27:37 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:37.379327    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:49 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:49.379455    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:27:51 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:27:51.380376    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:02 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:02.379305    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:05 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:05.380126    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:16 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:16.379413    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:17 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:17.380084    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:27 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:27.379418    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:28 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:28.380054    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:40.379442    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:28:40 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:40.380292    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:52 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:52.379988    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:28:53 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:28:53.379421    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:29:06.379483    5012 pod_workers.go:190] Error syncing pod 9547f199-fee0-11eb-938e-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-p56jc_kubernetes-dashboard(9547f199-fee0-11eb-938e-0242c0a83a02)"
	Aug 16 22:29:06 old-k8s-version-20210816221528-6487 kubelet[5012]: E0816 22:29:06.380250    5012 pod_workers.go:190] Error syncing pod 95908c73-fee0-11eb-938e-0242c0a83a02 ("metrics-server-8546d8b77b-pb7tf_kube-system(95908c73-fee0-11eb-938e-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:29:10 old-k8s-version-20210816221528-6487 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [fc1a6c3255410ca13e0379073ba0e17180576d92a0ea5b02a71aa3563c7f8f18] <==
	* 2021/08/16 22:24:07 Using namespace: kubernetes-dashboard
	2021/08/16 22:24:07 Using in-cluster config to connect to apiserver
	2021/08/16 22:24:07 Using secret token for csrf signing
	2021/08/16 22:24:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:24:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:24:07 Successful initial request to the apiserver, version: v1.14.0
	2021/08/16 22:24:07 Generating JWE encryption key
	2021/08/16 22:24:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:24:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:24:07 Initializing JWE encryption key from synchronized object
	2021/08/16 22:24:07 Creating in-cluster Sidecar client
	2021/08/16 22:24:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:24:07 Serving insecurely on HTTP port: 9090
	2021/08/16 22:24:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:25:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:25:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:26:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:27:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:29:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:29:52 Metric client health check failed: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper/proxy/healthz": http2: client connection lost. Retrying in 30 seconds.
	2021/08/16 22:30:32 Metric client health check failed: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper/proxy/healthz": net/http: TLS handshake timeout. Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [9824eba2c3288da7218f49c3b45afa0fc7d2164956ff5f942d0295c3756a728c] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 114 [sync.Cond.Wait, 5 minutes]:
	sync.runtime_notifyListWait(0xc00032a2d0, 0x2)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032a2c0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003722a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem(0xc0004acf00, 0x18e5530, 0xc00004a180, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:935 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runClaimWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:924
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.2()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e2500)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e2500, 0x18b3d60, 0xc0005e0b40, 0x1, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e2500, 0x3b9aca00, 0x0, 0x1, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e2500, 0x3b9aca00, 0xc000440300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	

-- /stdout --
** stderr ** 
	E0816 22:30:34.502733  294181 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (84.81s)

x
+
TestNetworkPlugins/group/calico/DNS (358.55s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.882634ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (62.863938ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:13.328614    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.333876    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.344121    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.364357    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.404598    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.484898    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:13.645238    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.903002ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:13.966033    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:32:14.606966    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.384067ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:15.888140    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.579809ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:18.448323    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (125.748483ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:23.568492    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.948133ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:33.809057    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (67.71331ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.572101ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:32:51.830727    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:32:54.289787    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (66.606078ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:33:16.416095    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:33:19.515459    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:33:20.659682    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 22:33:35.250737    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (66.455506ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:34:11.451962    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.842399ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:34:51.615843    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.621142    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.631347    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.651568    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.691781    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.772044    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:51.932365    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:52.252920    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:52.893993    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:53.816190    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:53.821450    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:53.831680    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:53.851894    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:53.892105    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:53.972385    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:54.132895    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:54.175170    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:54.453559    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:55.027432    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.032701    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.042927    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.063140    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.094328    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:55.103511    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.183767    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.344146    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:55.664685    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:56.305531    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:56.374785    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:56.735287    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:57.170941    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:34:57.585787    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:34:58.936026    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:35:00.146295    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:01.855785    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:04.056399    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:35:05.266961    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:12.096567    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:14.296919    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:35:15.507389    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:32.482425    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:35:32.576743    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:35:34.777429    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:35:35.987840    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.367063ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:36:00.257844    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:36:06.875630    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:06.880896    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:06.891114    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:06.911368    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:06.951593    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:07.031825    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:07.192189    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:07.512733    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:08.153778    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:09.434195    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:11.995350    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:13.540903    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:15.738602    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:16.948004    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:17.116293    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:24.617141    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.622394    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.632637    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.652888    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.693114    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.773384    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:24.933792    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:25.254416    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:25.895286    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:27.175679    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:27.357019    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:29.736269    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.357054    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.362298    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.372541    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.392788    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.433037    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.513371    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.673767    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:31.994335    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:32.635322    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:33.915851    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:34.857104    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:36.476904    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:41.597318    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:45.097804    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:36:47.837900    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:36:51.837860    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:05.578771    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:12.318435    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.962417ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0816 22:37:13.327943    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:28.798098    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816221527-6487/client.crt: no such file or directory
E0816 22:37:35.461044    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816221527-6487/client.crt: no such file or directory
E0816 22:37:37.659641    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:38.869105    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816221527-6487/client.crt: no such file or directory
E0816 22:37:41.011640    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:46.539838    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816221528-6487/client.crt: no such file or directory
E0816 22:37:51.830835    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory
E0816 22:37:53.279604    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816221528-6487/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (68.489267ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/calico/DNS (358.55s)
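Two things in this failure are worth separating. The "Error from server (NotFound): the server could not find the requested resource" message is the API server rejecting the kubectl exec call itself, not nslookup output, so the probe apparently never reached the netcat pod and pod DNS was never actually exercised. The interleaved cert_rotation.go:168 errors appear to be client-go's certificate-rotation watcher still polling client.crt files of profiles that parallel tests have already torn down; they are noise in this log rather than a cause. The probe itself is easy to rerun by hand; the sketch below assumes the calico profile and its netcat deployment still exist, and that on a healthy cluster nslookup resolves kubernetes.default to 10.96.0.1, the first address of the ServiceCIDR 10.96.0.0/12 these clusters use:

    # Manual rerun of the failing DNS probe (hypothetical session; the
    # profile and deployment names are taken from the log above):
    kubectl --context calico-20210816221528-6487 exec deployment/netcat -- \
        nslookup kubernetes.default
    # On success the output contains the kubernetes Service ClusterIP,
    # which is the substring net_test.go:173 wants, e.g.:
    #   Name:      kubernetes.default
    #   Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local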


Test pass (223/262)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 5.86
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 8.13
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 6.67
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 7.95
26 TestOffline 103.65
29 TestAddons/parallel/Registry 13.2
31 TestAddons/parallel/MetricsServer 5.68
32 TestAddons/parallel/HelmTiller 10.17
33 TestAddons/parallel/Olm 69.43
34 TestAddons/parallel/CSI 49.51
35 TestAddons/parallel/GCPAuth 47.01
36 TestCertOptions 228.17
38 TestForceSystemdFlag 38.77
39 TestForceSystemdEnv 34.28
40 TestKVMDriverInstallOrUpdate 1.88
44 TestErrorSpam/setup 27.17
45 TestErrorSpam/start 0.93
46 TestErrorSpam/status 0.92
47 TestErrorSpam/pause 3.65
48 TestErrorSpam/unpause 1.27
49 TestErrorSpam/stop 23.86
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 97.85
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 5.21
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.21
60 TestFunctional/serial/CacheCmd/cache/add_remote 3.29
61 TestFunctional/serial/CacheCmd/cache/add_local 4.51
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
66 TestFunctional/serial/CacheCmd/cache/delete 0.1
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 34.05
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1
72 TestFunctional/serial/LogsFileCmd 1
74 TestFunctional/parallel/ConfigCmd 0.4
75 TestFunctional/parallel/DashboardCmd 4.32
76 TestFunctional/parallel/DryRun 0.62
77 TestFunctional/parallel/InternationalLanguage 0.27
78 TestFunctional/parallel/StatusCmd 1.62
81 TestFunctional/parallel/ServiceCmd 16.44
82 TestFunctional/parallel/AddonsCmd 0.16
83 TestFunctional/parallel/PersistentVolumeClaim 30.23
85 TestFunctional/parallel/SSHCmd 0.67
86 TestFunctional/parallel/CpCmd 0.64
87 TestFunctional/parallel/MySQL 25.05
88 TestFunctional/parallel/FileSync 0.35
89 TestFunctional/parallel/CertSync 1.93
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 1.75
95 TestFunctional/parallel/RemoveImage 3.73
96 TestFunctional/parallel/LoadImageFromFile 2.74
97 TestFunctional/parallel/BuildImage 4.59
98 TestFunctional/parallel/ListImages 0.4
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
101 TestFunctional/parallel/Version/short 0.07
102 TestFunctional/parallel/Version/components 1.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
106 TestFunctional/parallel/MountCmd/any-port 13.23
108 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
110 TestFunctional/parallel/MountCmd/specific-port 2.07
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
112 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
116 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
118 TestFunctional/parallel/ProfileCmd/profile_list 0.41
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
147 TestKicCustomNetwork/create_custom_network 29.57
148 TestKicCustomNetwork/use_default_bridge_network 26.96
149 TestKicExistingNetwork 25.13
150 TestMainNoArgs 0.05
153 TestMultiNode/serial/FreshStart2Nodes 96.18
154 TestMultiNode/serial/DeployApp2Nodes 25.52
156 TestMultiNode/serial/AddNode 26
157 TestMultiNode/serial/ProfileList 0.3
158 TestMultiNode/serial/CopyFile 2.27
159 TestMultiNode/serial/StopNode 2.45
160 TestMultiNode/serial/StartAfterStop 33.29
161 TestMultiNode/serial/RestartKeepsNodes 156.8
162 TestMultiNode/serial/DeleteNode 5.35
163 TestMultiNode/serial/StopMultiNode 41.31
164 TestMultiNode/serial/RestartMultiNode 70.34
165 TestMultiNode/serial/ValidateNameConflict 29.9
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.42
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.41
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.29
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 14.37
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 13.88
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 14.1
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 12.93
199 TestInsufficientStorage 13.22
202 TestKubernetesUpgrade 124.49
203 TestMissingContainerUpgrade 190.23
212 TestPause/serial/Start 103.02
220 TestNetworkPlugins/group/false 0.62
225 TestStartStop/group/old-k8s-version/serial/FirstStart 104.29
226 TestPause/serial/SecondStartNoReconfiguration 6.65
229 TestStartStop/group/no-preload/serial/FirstStart 115.85
230 TestStartStop/group/old-k8s-version/serial/DeployApp 9.03
231 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.62
232 TestStartStop/group/old-k8s-version/serial/Stop 20.74
234 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
235 TestStartStop/group/old-k8s-version/serial/SecondStart 676.01
236 TestStartStop/group/no-preload/serial/DeployApp 7.48
237 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.65
238 TestStartStop/group/no-preload/serial/Stop 20.79
239 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
240 TestStartStop/group/no-preload/serial/SecondStart 351.67
241 TestPause/serial/Unpause 0.83
244 TestStartStop/group/embed-certs/serial/FirstStart 92.13
245 TestPause/serial/DeletePaused 3.64
246 TestPause/serial/VerifyDeletedResources 2.61
248 TestStartStop/group/default-k8s-different-port/serial/FirstStart 52.88
249 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.54
250 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.8
251 TestStartStop/group/default-k8s-different-port/serial/Stop 20.75
252 TestStartStop/group/embed-certs/serial/DeployApp 8.53
253 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
254 TestStartStop/group/embed-certs/serial/Stop 20.75
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
256 TestStartStop/group/default-k8s-different-port/serial/SecondStart 344.26
257 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
258 TestStartStop/group/embed-certs/serial/SecondStart 380.74
259 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
260 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
261 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
264 TestStartStop/group/newest-cni/serial/FirstStart 48.81
265 TestStartStop/group/newest-cni/serial/DeployApp 0
266 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.55
267 TestStartStop/group/newest-cni/serial/Stop 20.77
268 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
269 TestStartStop/group/newest-cni/serial/SecondStart 25.28
270 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
271 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
272 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
274 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
275 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.08
276 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.28
278 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
279 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
280 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
282 TestNetworkPlugins/group/auto/Start 98.83
283 TestNetworkPlugins/group/kindnet/Start 97.64
284 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
285 TestNetworkPlugins/group/enable-default-cni/Start 53.19
286 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.21
287 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
289 TestNetworkPlugins/group/auto/KubeletFlags 0.3
290 TestNetworkPlugins/group/auto/NetCatPod 9.4
291 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.48
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
295 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
296 TestNetworkPlugins/group/auto/DNS 0.16
297 TestNetworkPlugins/group/auto/Localhost 0.18
298 TestNetworkPlugins/group/auto/HairPin 0.15
299 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
300 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
301 TestNetworkPlugins/group/bridge/Start 61.98
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
303 TestNetworkPlugins/group/cilium/Start 83.37
304 TestNetworkPlugins/group/kindnet/DNS 0.17
305 TestNetworkPlugins/group/kindnet/Localhost 0.16
306 TestNetworkPlugins/group/kindnet/HairPin 0.19
307 TestNetworkPlugins/group/custom-weave/Start 70.53
308 TestNetworkPlugins/group/calico/Start 74.46
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
310 TestNetworkPlugins/group/bridge/NetCatPod 9.52
311 TestNetworkPlugins/group/bridge/DNS 0.49
312 TestNetworkPlugins/group/bridge/Localhost 0.15
313 TestNetworkPlugins/group/bridge/HairPin 0.15
314 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.37
315 TestNetworkPlugins/group/custom-weave/NetCatPod 10.52
316 TestNetworkPlugins/group/cilium/ControllerPod 5.96
317 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
318 TestNetworkPlugins/group/cilium/NetCatPod 11.4
319 TestNetworkPlugins/group/cilium/DNS 0.15
320 TestNetworkPlugins/group/cilium/Localhost 0.16
321 TestNetworkPlugins/group/cilium/HairPin 0.16
322 TestNetworkPlugins/group/calico/ControllerPod 5.02
323 TestNetworkPlugins/group/calico/KubeletFlags 0.28
324 TestNetworkPlugins/group/calico/NetCatPod 11.27
TestDownloadOnly/v1.14.0/json-events (5.86s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.858410626s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (5.86s)
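For context, -o=json switches minikube to emitting one JSON event per line (CloudEvents-style), which is what lets this test assert on download progress programmatically instead of scraping human-readable output. A minimal sketch of consuming that stream, assuming jq is available; the profile name here is hypothetical, and the .type field follows minikube's CloudEvents output format:

    # Hypothetical: list the event types emitted during a download-only start.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
        --force --kubernetes-version=v1.14.0 --container-runtime=crio \
        --driver=docker | jq -r '.type'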

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)
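preload-exists takes 0.00s because it appears to assert only that the json-events run above left the preload tarball in minikube's cache. A hypothetical equivalent check by hand, using the default cache location rather than this job's long MINIKUBE_HOME path:

    # Hypothetical check that the v1.14.0 cri-o preload landed in the cache:
    ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4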

TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214057-6487
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214057-6487: exit status 85 (63.000871ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:40:57
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:40:57.663394    6499 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:40:57.663464    6499 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:40:57.663468    6499 out.go:311] Setting ErrFile to fd 2...
	I0816 21:40:57.663471    6499 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:40:57.663571    6499 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:40:57.663663    6499 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:40:57.663870    6499 out.go:305] Setting JSON to true
	I0816 21:40:57.698797    6499 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1225,"bootTime":1629148833,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:40:57.698909    6499 start.go:121] virtualization: kvm guest
	I0816 21:40:57.701933    6499 notify.go:169] Checking for updates...
	I0816 21:40:57.704554    6499 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:40:57.746682    6499 docker.go:132] docker version: linux-19.03.15
	I0816 21:40:57.746753    6499 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:40:58.092716    6499 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:40:57.777368373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:40:58.092824    6499 docker.go:244] overlay module found
	I0816 21:40:58.094653    6499 start.go:278] selected driver: docker
	I0816 21:40:58.094669    6499 start.go:751] validating driver "docker" against <nil>
	I0816 21:40:58.095188    6499 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:40:58.172165    6499 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:40:58.127167625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:40:58.172247    6499 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 21:40:58.172716    6499 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0816 21:40:58.172797    6499 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 21:40:58.172813    6499 cni.go:93] Creating CNI manager for ""
	I0816 21:40:58.172826    6499 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:40:58.172837    6499 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 21:40:58.172857    6499 start_flags.go:277] config:
	{Name:download-only-20210816214057-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210816214057-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:40:58.174772    6499 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:40:58.176107    6499 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0816 21:40:58.176219    6499 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:40:58.212600    6499 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0816 21:40:58.212630    6499 cache.go:56] Caching tarball of preloaded images
	I0816 21:40:58.212857    6499 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0816 21:40:58.214793    6499 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 21:40:58.254056    6499 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0816 21:40:58.258438    6499 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:40:58.258461    6499 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 21:41:02.002324    6499 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 21:41:02.002405    6499 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214057-6487"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
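Note: exit status 85 here is expected rather than a regression. A --download-only run caches images but never creates a control plane node, so `minikube logs` has nothing to read; the stdout above says as much ("The control plane node "" does not exist."). A minimal sketch of the check the harness performs, using the test binary path from this report:

	# A download-only profile has no control plane node, so `logs` exits 85.
	out/minikube-linux-amd64 logs -p download-only-20210816214057-6487
	echo "exit code: $?"   # expected: 85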

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (8.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.134431618s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (8.13s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214057-6487
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214057-6487: exit status 85 (61.159973ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:41:03
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:41:03.586977    6642 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:41:03.587047    6642 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:03.587050    6642 out.go:311] Setting ErrFile to fd 2...
	I0816 21:41:03.587053    6642 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:03.587151    6642 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:41:03.587241    6642 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:41:03.587328    6642 out.go:305] Setting JSON to true
	I0816 21:41:03.621677    6642 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1231,"bootTime":1629148833,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:41:03.621764    6642 start.go:121] virtualization: kvm guest
	I0816 21:41:03.624561    6642 notify.go:169] Checking for updates...
	I0816 21:41:03.626866    6642 config.go:177] Loaded profile config "download-only-20210816214057-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0816 21:41:03.626906    6642 start.go:659] api.Load failed for download-only-20210816214057-6487: filestore "download-only-20210816214057-6487": Docker machine "download-only-20210816214057-6487" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:03.626937    6642 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 21:41:03.626963    6642 start.go:659] api.Load failed for download-only-20210816214057-6487: filestore "download-only-20210816214057-6487": Docker machine "download-only-20210816214057-6487" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:03.669812    6642 docker.go:132] docker version: linux-19.03.15
	I0816 21:41:03.669880    6642 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:03.741591    6642 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:03.701180238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:03.741687    6642 docker.go:244] overlay module found
	I0816 21:41:03.743814    6642 start.go:278] selected driver: docker
	I0816 21:41:03.743835    6642 start.go:751] validating driver "docker" against &{Name:download-only-20210816214057-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210816214057-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:03.744332    6642 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:03.822475    6642 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:03.775854082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:03.822982    6642 cni.go:93] Creating CNI manager for ""
	I0816 21:41:03.823002    6642 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:41:03.823010    6642 start_flags.go:277] config:
	{Name:download-only-20210816214057-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210816214057-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:03.825194    6642 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:41:03.826693    6642 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:41:03.826800    6642 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:41:03.856526    6642 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 21:41:03.856552    6642 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:03.856771    6642 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0816 21:41:03.858614    6642 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0816 21:41:03.898476    6642 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0816 21:41:03.908127    6642 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:41:03.908146    6642 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214057-6487"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)
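The v1.21.3 run above fetches the preload tarball with an md5 query parameter and then verifies the saved file. A manual equivalent (a sketch; URL and checksum are taken verbatim from the download.go line in the log):

	# Download the preload tarball and check it against the md5 from the log.
	curl -sSLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4"
	echo "5b844d0f443dc130a4f324a367701516  preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4" | md5sum -c -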

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (6.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214057-6487 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.669334748s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (6.67s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214057-6487
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214057-6487: exit status 85 (64.349738ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:41:11
	Running on machine: debian-jenkins-agent-13
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:41:11.783311    6789 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:41:11.783378    6789 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:11.783382    6789 out.go:311] Setting ErrFile to fd 2...
	I0816 21:41:11.783385    6789 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:11.783485    6789 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:41:11.783588    6789 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:41:11.783693    6789 out.go:305] Setting JSON to true
	I0816 21:41:11.817764    6789 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1239,"bootTime":1629148833,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:41:11.817848    6789 start.go:121] virtualization: kvm guest
	I0816 21:41:11.820535    6789 notify.go:169] Checking for updates...
	I0816 21:41:11.822699    6789 config.go:177] Loaded profile config "download-only-20210816214057-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0816 21:41:11.822745    6789 start.go:659] api.Load failed for download-only-20210816214057-6487: filestore "download-only-20210816214057-6487": Docker machine "download-only-20210816214057-6487" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:11.822791    6789 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 21:41:11.822828    6789 start.go:659] api.Load failed for download-only-20210816214057-6487: filestore "download-only-20210816214057-6487": Docker machine "download-only-20210816214057-6487" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:11.865076    6789 docker.go:132] docker version: linux-19.03.15
	I0816 21:41:11.865149    6789 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:11.940613    6789 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:11.896447835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:11.940697    6789 docker.go:244] overlay module found
	I0816 21:41:11.942715    6789 start.go:278] selected driver: docker
	I0816 21:41:11.942733    6789 start.go:751] validating driver "docker" against &{Name:download-only-20210816214057-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210816214057-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:11.943194    6789 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:41:12.023476    6789 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-16 21:41:11.974634383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:41:12.024018    6789 cni.go:93] Creating CNI manager for ""
	I0816 21:41:12.024039    6789 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0816 21:41:12.024050    6789 start_flags.go:277] config:
	{Name:download-only-20210816214057-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210816214057-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:12.026038    6789 cache.go:117] Beginning downloading kic base image for docker with crio
	I0816 21:41:12.027343    6789 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 21:41:12.027392    6789 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 21:41:12.062031    6789 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 21:41:12.062059    6789 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:12.062287    6789 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0816 21:41:12.064434    6789 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 21:41:12.099963    6789 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0816 21:41:12.108659    6789 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0816 21:41:12.108681    6789 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0816 21:41:16.580151    6789 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 21:41:16.580237    6789 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214057-6487"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210816214057-6487
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnlyKic (7.95s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210816214119-6487 --force --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210816214119-6487 --force --alsologtostderr --driver=docker  --container-runtime=crio: (6.533346065s)
helpers_test.go:176: Cleaning up "download-docker-20210816214119-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210816214119-6487
--- PASS: TestDownloadOnlyKic (7.95s)

                                                
                                    
TestOffline (103.65s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210816221142-6487 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210816221142-6487 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m40.250104806s)
helpers_test.go:176: Cleaning up "offline-crio-20210816221142-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210816221142-6487

                                                
                                                
=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210816221142-6487: (3.397274102s)
--- PASS: TestOffline (103.65s)

                                                
                                    
TestAddons/parallel/Registry (13.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 13.682708ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-d8gdr" [4ba89676-ca7d-4e37-b69e-ff37274f3367] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015386997s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-p4pwl" [b330dd6b-0c8f-447d-aafb-29c153c4385f] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.03532469s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete po -l run=registry-test --now

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210816214127-6487 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210816214127-6487 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.536041254s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 ip
2021/08/16 21:44:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.20s)
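For reference, the two connectivity checks this test performs can be replayed by hand (a sketch assembled from the commands in the log; the DEBUG GET above corresponds to the node-IP probe against 192.168.49.2):

	# In-cluster DNS probe of the registry service, then the registry HTTP
	# endpoint on the node IP reported by the `ip` subcommand.
	kubectl --context addons-20210816214127-6487 run --rm registry-test --restart=Never --image=busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl "http://$(out/minikube-linux-amd64 -p addons-20210816214127-6487 ip):5000"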

                                                
                                    
TestAddons/parallel/MetricsServer (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 13.546817ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-9s24l" [fecc4cb4-f61b-4298-91d0-1d3127525972] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016833743s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210816214127-6487 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.17s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 15.402981ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-768d69497-wvx5x" [41a034c9-709c-45c2-ae5a-ea40334e7602] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015516593s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816214127-6487 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816214127-6487 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (4.658697071s)
addons_test.go:432: kubectl --context addons-20210816214127-6487 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: container helm-test not found in pod helm-test_kube-system
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.17s)
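The "Unable to use a TTY" stderr above is a side effect of passing -it from a non-interactive CI shell, not a tiller failure; the test still passes on the command's output. A variant that avoids the warning (a sketch; it only drops the -t flag from the command in the log):

	# Keep stdin attached (-i) but request no TTY, so kubectl does not warn in CI.
	kubectl --context addons-20210816214127-6487 run --rm helm-test --restart=Never \
	  --image=alpine/helm:2.16.3 -i --namespace=kube-system --serviceaccount=tiller -- version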

                                                
                                    
TestAddons/parallel/Olm (69.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 13.279941ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 15.879425ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:471: packageserver stabilized in 18.324423ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-wxwkk" [e9e86c4e-6c58-410d-b5af-3375bd212d24] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.01117788s

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-bpxtv" [ddfce9ff-55e0-470d-a95e-9b9d47833ea8] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.033213255s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
helpers_test.go:343: "packageserver-5f7f778fc6-m6bsd" [a641e42d-d060-4168-8bcf-9025274bc4a0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
helpers_test.go:343: "packageserver-5f7f778fc6-m6bsd" [a641e42d-d060-4168-8bcf-9025274bc4a0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
helpers_test.go:343: "packageserver-5f7f778fc6-m6bsd" [a641e42d-d060-4168-8bcf-9025274bc4a0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
helpers_test.go:343: "packageserver-5f7f778fc6-m6bsd" [a641e42d-d060-4168-8bcf-9025274bc4a0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
helpers_test.go:343: "packageserver-5f7f778fc6-m6bsd" [a641e42d-d060-4168-8bcf-9025274bc4a0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-5f7f778fc6-kz29t" [78c6d879-42ff-48d0-a01e-a0fdc3896ebe] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.008379764s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-hr9q8" [5a12e003-1abd-47c3-a741-11ce1a8e646f] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.006680874s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/etcd.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214127-6487 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214127-6487 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214127-6487 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214127-6487 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214127-6487 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (69.43s)
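The repeated "get csv" runs above are the harness polling until the ClusterServiceVersion created from testdata/etcd.yaml appears in the my-etcd namespace (the "No resources found" stderr is the expected not-yet state). A standalone loop with the same effect (a sketch; the 5s interval is an assumption, not the harness's actual retry cadence):

	# Poll until `get csv` returns at least one resource on stdout.
	until [ -n "$(kubectl --context addons-20210816214127-6487 get csv -n my-etcd 2>/dev/null)" ]; do
	  sleep 5
	done
	kubectl --context addons-20210816214127-6487 get csv -n my-etcd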

                                                
                                    
TestAddons/parallel/CSI (49.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 6.926939ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214127-6487 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/csi-hostpath-driver/pv-pod.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [41be2884-f531-471d-80ef-a306a0f80cd5] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [41be2884-f531-471d-80ef-a306a0f80cd5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [41be2884-f531-471d-80ef-a306a0f80cd5] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.008156903s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816214127-6487 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816214127-6487 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210816214127-6487 delete pod task-pv-pod: (2.750058986s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214127-6487 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [53090c02-94bc-47c1-8ef7-e25c10416e75] Pending
helpers_test.go:343: "task-pv-pod-restore" [53090c02-94bc-47c1-8ef7-e25c10416e75] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [53090c02-94bc-47c1-8ef7-e25c10416e75] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.075269482s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete pod task-pv-pod-restore

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:591: (dbg) Done: kubectl --context addons-20210816214127-6487 delete pod task-pv-pod-restore: (9.321882508s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210816214127-6487 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable csi-hostpath-driver --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.989942609s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.51s)
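The PVC waits in this test re-run the jsonpath query shown at helpers_test.go:393 until the claim's phase reaches Bound. An equivalent standalone loop (a sketch; the 2s interval is an assumption):

	# Re-run the exact jsonpath probe from the log until the claim binds.
	while [ "$(kubectl --context addons-20210816214127-6487 get pvc hpvc -o jsonpath='{.status.phase}')" != "Bound" ]; do
	  sleep 2
	done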

                                                
                                    
TestAddons/parallel/GCPAuth (47.01s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210816214127-6487 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ee63da3f-2c9f-4ede-b1cb-4ffc4bbfd291] Pending

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [ee63da3f-2c9f-4ede-b1cb-4ffc4bbfd291] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [ee63da3f-2c9f-4ede-b1cb-4ffc4bbfd291] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.005313845s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210816214127-6487 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210816214127-6487 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210816214127-6487 apply -f testdata/private-image.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fm799" [eee04c4f-b13b-40e8-a929-c5b8d5337b2a] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fm799" [eee04c4f-b13b-40e8-a929-c5b8d5337b2a] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 19.004927997s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210816214127-6487 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-s2jmw" [39042918-af8c-4c5a-b463-295b53252738] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-s2jmw" [39042918-af8c-4c5a-b463-295b53252738] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 12.007416967s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214127-6487 addons disable gcp-auth --alsologtostderr -v=1: (5.781636947s)
--- PASS: TestAddons/parallel/GCPAuth (47.01s)

TestCertOptions (228.17s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210816221525-6487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210816221525-6487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (3m44.806164751s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210816221525-6487 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210816221525-6487 config view
helpers_test.go:176: Cleaning up "cert-options-20210816221525-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210816221525-6487

=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210816221525-6487: (3.027799554s)
--- PASS: TestCertOptions (228.17s)

TestForceSystemdFlag (38.77s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210816221142-6487 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210816221142-6487 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.592700857s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210816221142-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210816221142-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210816221142-6487: (3.173949397s)
--- PASS: TestForceSystemdFlag (38.77s)

TestForceSystemdEnv (34.28s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210816221453-6487 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210816221453-6487 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.666295223s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210816221453-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210816221453-6487

=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210816221453-6487: (2.615142106s)
--- PASS: TestForceSystemdEnv (34.28s)

TestKVMDriverInstallOrUpdate (1.88s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.88s)

TestErrorSpam/setup (27.17s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210816214947-6487 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210816214947-6487 --driver=docker  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210816214947-6487 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210816214947-6487 --driver=docker  --container-runtime=crio: (27.172803791s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (27.17s)

TestErrorSpam/start (0.93s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 start --dry-run
--- PASS: TestErrorSpam/start (0.93s)

TestErrorSpam/status (0.92s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (3.65s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 pause: exit status 80 (2.250729459s)

-- stdout --
	* Pausing node nospam-20210816214947-6487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc pause 8a292f653733582a5468024c2e36de38ca64b64418bcf69889f8a3a44ae69bfe a8ae5b2d4f7a75e305ef9246195d9d0ddb2c0d595fbb8ef1b72d2f8e78f9df8f: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T21:50:18Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 pause
--- PASS: TestErrorSpam/pause (3.65s)

TestErrorSpam/unpause (1.27s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

TestErrorSpam/stop (23.86s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 stop: (23.597309165s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214947-6487 --log_dir /tmp/nospam-20210816214947-6487 stop
--- PASS: TestErrorSpam/stop (23.86s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/6487/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (97.85s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816215050-6487 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m37.846165448s)
--- PASS: TestFunctional/serial/StartWithProxy (97.85s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.21s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816215050-6487 --alsologtostderr -v=8: (5.204761405s)
functional_test.go:631: soft start took 5.205368298s for "functional-20210816215050-6487" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.21s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.21s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210816215050-6487 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add k8s.gcr.io/pause:3.3: (1.220963542s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add k8s.gcr.io/pause:latest: (1.21444325s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

TestFunctional/serial/CacheCmd/cache/add_local (4.51s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210816215050-6487 /tmp/functional-20210816215050-6487402275368
functional_test.go:1012: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210816215050-6487 /tmp/functional-20210816215050-6487402275368: (3.681536027s)
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache add minikube-local-cache-test:functional-20210816215050-6487
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache delete minikube-local-cache-test:functional-20210816215050-6487
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210816215050-6487
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.51s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (271.987416ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cache reload
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 kubectl -- --context functional-20210816215050-6487 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210816215050-6487 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (34.05s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816215050-6487 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.052911548s)
functional_test.go:719: restart took 34.053013692s for "functional-20210816215050-6487" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.05s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210816215050-6487 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 logs
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

TestFunctional/serial/LogsFileCmd (1s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 logs --file /tmp/functional-20210816215050-6487343010663/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (1.00s)

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 config get cpus: exit status 14 (71.584949ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 config get cpus: exit status 14 (59.266007ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (4.32s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210816215050-6487 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210816215050-6487 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 51289: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.32s)

TestFunctional/parallel/DryRun (0.62s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210816215050-6487 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (263.188642ms)

-- stdout --
	* [functional-20210816215050-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0816 21:53:47.378727   51477 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:53:47.378821   51477 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:53:47.378831   51477 out.go:311] Setting ErrFile to fd 2...
	I0816 21:53:47.378834   51477 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:53:47.378944   51477 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:53:47.379164   51477 out.go:305] Setting JSON to false
	I0816 21:53:47.415519   51477 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1994,"bootTime":1629148833,"procs":256,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:53:47.415604   51477 start.go:121] virtualization: kvm guest
	I0816 21:53:47.417843   51477 out.go:177] * [functional-20210816215050-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 21:53:47.419516   51477 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:53:47.420881   51477 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:53:47.422309   51477 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:53:47.423543   51477 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:53:47.424021   51477 config.go:177] Loaded profile config "functional-20210816215050-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:53:47.424412   51477 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:53:47.476900   51477 docker.go:132] docker version: linux-19.03.15
	I0816 21:53:47.476986   51477 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:53:47.567556   51477 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-16 21:53:47.51687184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:53:47.567646   51477 docker.go:244] overlay module found
	I0816 21:53:47.570303   51477 out.go:177] * Using the docker driver based on existing profile
	I0816 21:53:47.570326   51477 start.go:278] selected driver: docker
	I0816 21:53:47.570332   51477 start.go:751] validating driver "docker" against &{Name:functional-20210816215050-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816215050-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:53:47.570448   51477 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 21:53:47.570482   51477 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 21:53:47.570506   51477 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0816 21:53:47.572752   51477 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 21:53:47.574721   51477 out.go:177] 
	W0816 21:53:47.574795   51477 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 21:53:47.576138   51477 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)

TestFunctional/parallel/InternationalLanguage (0.27s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816215050-6487 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210816215050-6487 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (270.236321ms)

-- stdout --
	* [functional-20210816215050-6487] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0816 21:53:47.100418   51338 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:53:47.100506   51338 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:53:47.100511   51338 out.go:311] Setting ErrFile to fd 2...
	I0816 21:53:47.100514   51338 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:53:47.100650   51338 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:53:47.100864   51338 out.go:305] Setting JSON to false
	I0816 21:53:47.146230   51338 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":1994,"bootTime":1629148833,"procs":263,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:53:47.146419   51338 start.go:121] virtualization: kvm guest
	I0816 21:53:47.149650   51338 out.go:177] * [functional-20210816215050-6487] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0816 21:53:47.151471   51338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:53:47.152953   51338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:53:47.154446   51338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:53:47.156015   51338 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:53:47.156559   51338 config.go:177] Loaded profile config "functional-20210816215050-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:53:47.157122   51338 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:53:47.212511   51338 docker.go:132] docker version: linux-19.03.15
	I0816 21:53:47.212594   51338 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 21:53:47.304027   51338 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-16 21:53:47.254387979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 21:53:47.304130   51338 docker.go:244] overlay module found
	I0816 21:53:47.306643   51338 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0816 21:53:47.306668   51338 start.go:278] selected driver: docker
	I0816 21:53:47.306674   51338 start.go:751] validating driver "docker" against &{Name:functional-20210816215050-6487 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816215050-6487 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:53:47.306772   51338 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 21:53:47.306803   51338 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 21:53:47.306818   51338 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0816 21:53:47.308503   51338 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 21:53:47.310521   51338 out.go:177] 
	W0816 21:53:47.310652   51338 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 21:53:47.311928   51338 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.62s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.62s)

TestFunctional/parallel/ServiceCmd (16.44s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210816215050-6487 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210816215050-6487 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-s9288" [bea6af32-1e00-45e7-a234-910ba5e6b6bf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-s9288" [bea6af32-1e00-45e7-a234-910ba5e6b6bf] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 14.005965609s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 service list
functional_test.go:1372: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 service list: (1.31021571s)
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:30889
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:30889
functional_test.go:1431: Attempting to fetch http://192.168.49.2:30889 ...
functional_test.go:1450: http://192.168.49.2:30889: success! body:

Hostname: hello-node-6cbfcd7cbc-s9288

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30889
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (16.44s)
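
The passing sequence above is a complete round trip through "minikube service": create a deployment, expose it as a NodePort, resolve the URL, and fetch it. Condensed into a hedged shell sketch (profile and service names are taken from the log; any others would work the same way):

$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
$ kubectl expose deployment hello-node --type=NodePort --port=8080
$ minikube -p functional-20210816215050-6487 service hello-node --url   # prints e.g. http://192.168.49.2:30889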

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (30.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [9c8d45c8-3cc8-432d-bbc7-a64b412a9068] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009874976s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210816215050-6487 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210816215050-6487 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210816215050-6487 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816215050-6487 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [6713447a-6c7b-4783-ac4c-36c8b8711e06] Pending
helpers_test.go:343: "sp-pod" [6713447a-6c7b-4783-ac4c-36c8b8711e06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [6713447a-6c7b-4783-ac4c-36c8b8711e06] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.02157798s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210816215050-6487 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210816215050-6487 delete -f testdata/storage-provisioner/pod.yaml: (2.166630678s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816215050-6487 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [a35816a8-49bd-447b-9642-489ec0e1bdd9] Pending
helpers_test.go:343: "sp-pod" [a35816a8-49bd-447b-9642-489ec0e1bdd9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [a35816a8-49bd-447b-9642-489ec0e1bdd9] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00602344s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.23s)
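
The manifests under testdata/storage-provisioner are not reproduced in the log. A minimal claim that minikube's default storage-provisioner could bind might look like the sketch below; the claim name "myclaim" comes from the "get pvc myclaim" call above, while everything else is an assumption:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim          # name taken from the log
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi     # size not shown in the log; assumed
EOF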

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (0.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.64s)

TestFunctional/parallel/MySQL (25.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210816215050-6487 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-tzd8q" [acbc5179-2c50-462a-866f-c27e8784ff9e] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-tzd8q" [acbc5179-2c50-462a-866f-c27e8784ff9e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-tzd8q" [acbc5179-2c50-462a-866f-c27e8784ff9e] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.016283337s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;": exit status 1 (166.860389ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;": exit status 1 (155.689115ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;": exit status 1 (294.720225ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816215050-6487 exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.05s)
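
The two intermediate failures are expected rather than bugs: ERROR 1045 and ERROR 2002 are typical of a MySQL container that is still initializing, and the test simply retries the query until it succeeds. A hedged shell equivalent of that polling loop (pod name from the log; the retry interval is an assumption):

$ until kubectl exec mysql-9bbbc5bbb-tzd8q -- mysql -ppassword -e "show databases;"; do
>   sleep 2   # back off between attempts until mysqld accepts connections
> done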

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/6487/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /etc/test/nested/copy/6487/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
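
File sync here relies on minikube copying everything under the profile's files tree into the node at the same path. A hedged sketch of the same mechanism with a default MINIKUBE_HOME (the nested path mirrors the one checked above; the file must be staged before the cluster starts):

$ mkdir -p ~/.minikube/files/etc/test/nested/copy/6487
$ echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/6487/hosts
$ minikube start            # the files/ tree is copied into the node on start
$ minikube ssh -- cat /etc/test/nested/copy/6487/hosts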

TestFunctional/parallel/CertSync (1.93s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/6487.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /etc/ssl/certs/6487.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/6487.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /usr/share/ca-certificates/6487.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/64872.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /etc/ssl/certs/64872.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/64872.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /usr/share/ca-certificates/64872.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.93s)
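
The hashed names (51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash form that TLS libraries use to look certificates up in /etc/ssl/certs, so the test checks each synced cert twice: by file name and by hash link. The hash for any PEM certificate can be reproduced with:

$ openssl x509 -noout -hash -in 6487.pem   # prints the 8-hex-digit hash used for the ".0" link, e.g. 51391683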

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210816215050-6487 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/LoadImage (1.75s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210816215050-6487
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image load docker.io/library/busybox:load-functional-20210816215050-6487
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 image load docker.io/library/busybox:load-functional-20210816215050-6487: (1.163378529s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816215050-6487 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210816215050-6487
--- PASS: TestFunctional/parallel/LoadImage (1.75s)

TestFunctional/parallel/RemoveImage (3.73s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210816215050-6487
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image load docker.io/library/busybox:remove-functional-20210816215050-6487
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 image load docker.io/library/busybox:remove-functional-20210816215050-6487: (2.508045017s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image rm docker.io/library/busybox:remove-functional-20210816215050-6487
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816215050-6487 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.73s)

TestFunctional/parallel/LoadImageFromFile (2.74s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210816215050-6487
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210816215050-6487
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar: (1.832166843s)
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816215050-6487 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.74s)
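
Both load paths above (from the docker daemon and from a tarball) end in the node's CRI-O image store, which is why verification goes through crictl rather than docker. The tarball variant, condensed into a hedged sketch:

$ docker save -o busybox.tar docker.io/library/busybox:1.31
$ minikube -p functional-20210816215050-6487 image load ./busybox.tar
$ minikube -p functional-20210816215050-6487 ssh -- sudo crictl images | grep busybox   # now visible to CRI-O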

TestFunctional/parallel/BuildImage (4.59s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image build -t localhost/my-image:functional-20210816215050-6487 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 image build -t localhost/my-image:functional-20210816215050-6487 testdata/build: (4.272545995s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210816215050-6487 image build -t localhost/my-image:functional-20210816215050-6487 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> b2eb6735b94
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210816215050-6487
--> f837cb090d9
Successfully tagged localhost/my-image:functional-20210816215050-6487
f837cb090d948b5a8f3955c1f65687d3f998e9d11c3fa793aeb7d5f5f42e4304
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210816215050-6487 image build -t localhost/my-image:functional-20210816215050-6487 testdata/build:
Resolved "busybox" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
Copying config sha256:69593048aa3acfee0f75f20b77acb549de2472063053f6730c4091b53f2dfb02
Writing manifest to image destination
Storing signatures
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816215050-6487 -- sudo crictl inspecti localhost/my-image:functional-20210816215050-6487
--- PASS: TestFunctional/parallel/BuildImage (4.59s)
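
The STEP lines imply that testdata/build contains a Dockerfile equivalent to FROM busybox / RUN true / ADD content.txt, built by the node's buildah-backed runtime (hence the shortnames.conf resolution in stderr). A hedged reconstruction of an equivalent context; the real files are not shown in the log:

$ mkdir build && cd build
$ printf 'FROM busybox\nRUN true\nADD content.txt /\n' > Dockerfile   # inferred from the STEP log
$ echo test > content.txt                                             # contents are an assumption
$ minikube -p functional-20210816215050-6487 image build -t localhost/my-image:demo .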

TestFunctional/parallel/ListImages (0.4s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210816215050-6487 image ls:
localhost/minikube-local-cache-test:functional-20210816215050-6487
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.40s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo systemctl is-active docker": exit status 1 (367.757311ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo systemctl is-active containerd"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo systemctl is-active containerd": exit status 1 (307.863204ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
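
The two "Non-zero exit" results here are the expected outcome, not failures: with CRI-O as the active runtime, "systemctl is-active" prints "inactive" for docker and containerd and exits non-zero (the remote status 3 in stderr matches the LSB "not running" convention), which is exactly what the test asserts. The same check by hand:

$ minikube -p functional-20210816215050-6487 ssh -- sudo systemctl is-active docker   # prints "inactive"; minikube exits 1 and reports the remote status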

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816215050-6487 version -o=json --components: (1.041471676s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 update-context --alsologtostderr -v=2
2021/08/16 21:53:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/MountCmd/any-port (13.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210816215050-6487 /tmp/mounttest194741146:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1629150805619218598" to /tmp/mounttest194741146/created-by-test
functional_test_mount_test.go:110: wrote "test-1629150805619218598" to /tmp/mounttest194741146/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1629150805619218598" to /tmp/mounttest194741146/test-1629150805619218598
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.189248ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 21:53 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 21:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 21:53 test-1629150805619218598
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh cat /mount-9p/test-1629150805619218598
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210816215050-6487 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [c521187e-7b53-4a82-820f-e8a0f7f02f0c] Pending
helpers_test.go:343: "busybox-mount" [c521187e-7b53-4a82-820f-e8a0f7f02f0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [c521187e-7b53-4a82-820f-e8a0f7f02f0c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.005398026s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210816215050-6487 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816215050-6487 /tmp/mounttest194741146:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.23s)
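
"minikube mount" runs a foreground 9p file server on the host, so the first findmnt probe can race the mount becoming visible, which is why the test tolerates one failed attempt above. The same flow by hand, sketched (the host path is arbitrary):

$ minikube -p functional-20210816215050-6487 mount /tmp/mounttest:/mount-9p &   # keep the server running
$ minikube -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry while the mount lands
$ minikube -p functional-20210816215050-6487 ssh -- ls -la /mount-9p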

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210816215050-6487 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210816215050-6487 /tmp/mounttest287719473:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.011678ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816215050-6487 /tmp/mounttest287719473:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh "sudo umount -f /mount-9p": exit status 1 (286.070271ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210816215050-6487 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816215050-6487 /tmp/mounttest287719473:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210816215050-6487 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.101.174.207 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
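
AccessDirect succeeds because the tunnel started in StartTunnel routes the cluster's LoadBalancer range to the host, so nginx-svc's ingress IP (10.101.174.207) answers directly. The same pattern outside the test, sketched:

$ minikube -p functional-20210816215050-6487 tunnel &   # must keep running; may prompt for sudo
$ kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ curl -s http://10.101.174.207/ >/dev/null && echo reachable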

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210816215050-6487 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1245: Took "354.494442ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "55.710921ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1295: Took "488.679282ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "53.850477ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210816215050-6487
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210816215050-6487
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210816215050-6487
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210816215050-6487
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210816215550-6487 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210816215550-6487 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.244492ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210816215550-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"186d31a8-9373-4efe-8325-780761bf6dfd","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"},"datacontenttype":"application/json","id":"3af2d65d-8491-4a6a-98a7-f541f608a68a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"21c51e8c-aa53-4490-a615-a4cedf19c14e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube"},"datacontenttype":"application/json","id":"d51af7ea-4662-4468-8131-6018baf3e2c6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"c73f8e82-a3fe-4d7b-96cd-4a3212c6a00a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"98c25cdf-71a7-4372-b7a3-f3e7e35e8220","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210816215550-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210816215550-6487
--- PASS: TestErrorJSONOutput (0.32s)
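The stdout above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with the payload under "data" and the event kind under "type" (step, info, warning, error). A minimal Go sketch for consuming such a stream, assuming only the field names visible in the output above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope seen in the log: the payload is a flat
// string-to-string map, and "type" distinguishes step/info/warning/error.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // event lines can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate non-JSON noise in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piped the run above, this would print the DRV_UNSUPPORTED_OS event with exit code 56.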

TestKicCustomNetwork/create_custom_network (29.57s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210816215550-6487 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210816215550-6487 --network=: (27.098487027s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210816215550-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210816215550-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210816215550-6487: (2.437134159s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.57s)

TestKicCustomNetwork/use_default_bridge_network (26.96s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210816215620-6487 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210816215620-6487 --network=bridge: (24.668258985s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210816215620-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210816215620-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210816215620-6487: (2.257744209s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.96s)

TestKicExistingNetwork (25.13s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210816215647-6487 --network=existing-network
E0816 21:56:55.295887    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210816215647-6487 --network=existing-network: (22.497154657s)
helpers_test.go:176: Cleaning up "existing-network-20210816215647-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210816215647-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210816215647-6487: (2.391208643s)
--- PASS: TestKicExistingNetwork (25.13s)
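TestKicExistingNetwork differs from the two TestKicCustomNetwork cases above in that the Docker network already exists before minikube starts, and --network= then attaches to it instead of provisioning a new one. A rough out-of-harness reproduction as a Go sketch, assuming docker and minikube on PATH and a throwaway profile name:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, mirroring its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", name, args, err)
	}
}

func main() {
	run("docker", "network", "create", "existing-network") // pre-create, as the test does
	run("minikube", "start", "-p", "existing-network-demo", "--network=existing-network")
	run("minikube", "delete", "-p", "existing-network-demo")
	run("docker", "network", "rm", "existing-network")
}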

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (96.18s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 21:58:20.660422    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.665722    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.675942    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.696194    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.736463    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.816764    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:20.977223    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:21.297770    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:21.937915    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:23.218078    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:25.778467    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:30.898715    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:58:41.138998    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.66394945s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.18s)

TestMultiNode/serial/DeployApp2Nodes (25.52s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- rollout status deployment/busybox
E0816 21:59:01.620183    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 21:59:11.452158    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- rollout status deployment/busybox: (23.740051669s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-lw52x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215712-6487 -- exec busybox-84b6686758-v4kzv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.52s)

TestMultiNode/serial/AddNode (26s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210816215712-6487 -v 3 --alsologtostderr
E0816 21:59:39.136934    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210816215712-6487 -v 3 --alsologtostderr: (25.276374683s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
E0816 21:59:42.581265    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/AddNode (26.00s)

TestMultiNode/serial/ProfileList (0.3s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (2.27s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 cp testdata/cp-test.txt multinode-20210816215712-6487-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 ssh -n multinode-20210816215712-6487-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 cp testdata/cp-test.txt multinode-20210816215712-6487-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 ssh -n multinode-20210816215712-6487-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.27s)

TestMultiNode/serial/StopNode (2.45s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215712-6487 node stop m03: (1.3476401s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215712-6487 status: exit status 7 (545.273998ms)

-- stdout --
	multinode-20210816215712-6487
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816215712-6487-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816215712-6487-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr: exit status 7 (553.019767ms)

-- stdout --
	multinode-20210816215712-6487
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816215712-6487-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816215712-6487-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 21:59:47.536110   83177 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:59:47.536310   83177 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:59:47.536321   83177 out.go:311] Setting ErrFile to fd 2...
	I0816 21:59:47.536324   83177 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:59:47.536441   83177 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:59:47.536630   83177 out.go:305] Setting JSON to false
	I0816 21:59:47.536651   83177 mustload.go:65] Loading cluster: multinode-20210816215712-6487
	I0816 21:59:47.536975   83177 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 21:59:47.536989   83177 status.go:253] checking status of multinode-20210816215712-6487 ...
	I0816 21:59:47.537356   83177 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 21:59:47.576068   83177 status.go:328] multinode-20210816215712-6487 host status = "Running" (err=<nil>)
	I0816 21:59:47.576116   83177 host.go:66] Checking if "multinode-20210816215712-6487" exists ...
	I0816 21:59:47.576403   83177 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487
	I0816 21:59:47.613729   83177 host.go:66] Checking if "multinode-20210816215712-6487" exists ...
	I0816 21:59:47.614030   83177 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:59:47.614102   83177 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487
	I0816 21:59:47.651381   83177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487/id_rsa Username:docker}
	I0816 21:59:47.748629   83177 ssh_runner.go:149] Run: systemctl --version
	I0816 21:59:47.752001   83177 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:59:47.761108   83177 kubeconfig.go:93] found "multinode-20210816215712-6487" server: "https://192.168.49.2:8443"
	I0816 21:59:47.761132   83177 api_server.go:164] Checking apiserver status ...
	I0816 21:59:47.761157   83177 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 21:59:47.778708   83177 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1321/cgroup
	I0816 21:59:47.785177   83177 api_server.go:180] apiserver freezer: "2:freezer:/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/system.slice/crio-2028867588d06f67474159b36c46071c2a70760b5b4184e6d981c0c60ca4c0ea.scope"
	I0816 21:59:47.785228   83177 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/1f5cc52eda7fb88b119f22928af5c6f8d89c12e3db9356cd5a372832f7528cc3/system.slice/crio-2028867588d06f67474159b36c46071c2a70760b5b4184e6d981c0c60ca4c0ea.scope/freezer.state
	I0816 21:59:47.790901   83177 api_server.go:202] freezer state: "THAWED"
	I0816 21:59:47.790923   83177 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 21:59:47.795215   83177 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 21:59:47.795234   83177 status.go:419] multinode-20210816215712-6487 apiserver status = Running (err=<nil>)
	I0816 21:59:47.795280   83177 status.go:255] multinode-20210816215712-6487 status: &{Name:multinode-20210816215712-6487 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 21:59:47.795306   83177 status.go:253] checking status of multinode-20210816215712-6487-m02 ...
	I0816 21:59:47.795548   83177 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Status}}
	I0816 21:59:47.832483   83177 status.go:328] multinode-20210816215712-6487-m02 host status = "Running" (err=<nil>)
	I0816 21:59:47.832503   83177 host.go:66] Checking if "multinode-20210816215712-6487-m02" exists ...
	I0816 21:59:47.832772   83177 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816215712-6487-m02
	I0816 21:59:47.868475   83177 host.go:66] Checking if "multinode-20210816215712-6487-m02" exists ...
	I0816 21:59:47.868749   83177 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:59:47.868799   83177 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816215712-6487-m02
	I0816 21:59:47.906827   83177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215712-6487-m02/id_rsa Username:docker}
	I0816 21:59:47.991974   83177 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:59:48.000112   83177 status.go:255] multinode-20210816215712-6487-m02 status: &{Name:multinode-20210816215712-6487-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 21:59:48.000174   83177 status.go:253] checking status of multinode-20210816215712-6487-m03 ...
	I0816 21:59:48.000403   83177 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m03 --format={{.State.Status}}
	I0816 21:59:48.039114   83177 status.go:328] multinode-20210816215712-6487-m03 host status = "Stopped" (err=<nil>)
	I0816 21:59:48.039151   83177 status.go:341] host is not running, skipping remaining checks
	I0816 21:59:48.039156   83177 status.go:255] multinode-20210816215712-6487-m03 status: &{Name:multinode-20210816215712-6487-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
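Note the exit codes in this test: minikube status exits non-zero (7 in both runs above) once any host is stopped, so the harness, and any script wrapping it, must read the code as state rather than treating it as a failure. A small Go sketch of that pattern, with a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 7 (as in the runs above) reports stopped hosts;
	// CombinedOutput still returns the per-node status text.
	out, err := exec.Command("minikube", "-p", "multinode-demo", "status").CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
	}
}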

TestMultiNode/serial/StartAfterStop (33.29s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215712-6487 node start m03 --alsologtostderr: (32.473009643s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.29s)

TestMultiNode/serial/RestartKeepsNodes (156.8s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215712-6487
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210816215712-6487
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210816215712-6487: (42.169728361s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true -v=8 --alsologtostderr
E0816 22:01:04.502189    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true -v=8 --alsologtostderr: (1m54.528986851s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215712-6487
--- PASS: TestMultiNode/serial/RestartKeepsNodes (156.80s)

TestMultiNode/serial/DeleteNode (5.35s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215712-6487 node delete m03: (4.678889472s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

TestMultiNode/serial/StopMultiNode (41.31s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 stop
E0816 22:03:20.662176    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215712-6487 stop: (41.062355477s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215712-6487 status: exit status 7 (123.737855ms)

-- stdout --
	multinode-20210816215712-6487
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816215712-6487-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr: exit status 7 (123.785188ms)

-- stdout --
	multinode-20210816215712-6487
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816215712-6487-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 22:03:44.710836   95953 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:03:44.710929   95953 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:03:44.710939   95953 out.go:311] Setting ErrFile to fd 2...
	I0816 22:03:44.710947   95953 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:03:44.711058   95953 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:03:44.711782   95953 out.go:305] Setting JSON to false
	I0816 22:03:44.711813   95953 mustload.go:65] Loading cluster: multinode-20210816215712-6487
	I0816 22:03:44.712518   95953 config.go:177] Loaded profile config "multinode-20210816215712-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:03:44.712545   95953 status.go:253] checking status of multinode-20210816215712-6487 ...
	I0816 22:03:44.712956   95953 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487 --format={{.State.Status}}
	I0816 22:03:44.750019   95953 status.go:328] multinode-20210816215712-6487 host status = "Stopped" (err=<nil>)
	I0816 22:03:44.750038   95953 status.go:341] host is not running, skipping remaining checks
	I0816 22:03:44.750044   95953 status.go:255] multinode-20210816215712-6487 status: &{Name:multinode-20210816215712-6487 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 22:03:44.750074   95953 status.go:253] checking status of multinode-20210816215712-6487-m02 ...
	I0816 22:03:44.750306   95953 cli_runner.go:115] Run: docker container inspect multinode-20210816215712-6487-m02 --format={{.State.Status}}
	I0816 22:03:44.785997   95953 status.go:328] multinode-20210816215712-6487-m02 host status = "Stopped" (err=<nil>)
	I0816 22:03:44.786027   95953 status.go:341] host is not running, skipping remaining checks
	I0816 22:03:44.786034   95953 status.go:255] multinode-20210816215712-6487-m02 status: &{Name:multinode-20210816215712-6487-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.31s)

TestMultiNode/serial/RestartMultiNode (70.34s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 22:03:48.342361    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 22:04:11.452164    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215712-6487 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.64302527s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215712-6487 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (70.34s)

TestMultiNode/serial/ValidateNameConflict (29.9s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215712-6487
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215712-6487-m02 --driver=docker  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210816215712-6487-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.598564ms)

-- stdout --
	* [multinode-20210816215712-6487-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210816215712-6487-m02' is duplicated with machine name 'multinode-20210816215712-6487-m02' in profile 'multinode-20210816215712-6487'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215712-6487-m03 --driver=docker  --container-runtime=crio
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215712-6487-m03 --driver=docker  --container-runtime=crio: (26.679246616s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210816215712-6487
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210816215712-6487: exit status 80 (259.084228ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210816215712-6487
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210816215712-6487-m03 already exists in multinode-20210816215712-6487-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210816215712-6487-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210816215712-6487-m03: (2.812436843s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.90s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.42s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.419593521s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.42s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.001408712s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.00s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.41s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.405401737s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.41s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.29s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.292323586s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.29s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.37s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.372115491s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.37s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.88s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.875774188s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.88s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.1s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.101569924s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.10s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.93s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (12.92571264s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.93s)
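Each kvm2-driver case above runs the same three-step script in a throwaway container: refresh apt metadata, install the libvirt0 dependency, then dpkg -i the built .deb. A Go sketch of that matrix outside the test binary; the host mount path is a placeholder for your checkout's out/ directory, and the .deb version mirrors the commands in this report:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	images := []string{
		"debian:sid", "debian:latest", "debian:10", "debian:9",
		"ubuntu:latest", "ubuntu:20.10", "ubuntu:20.04", "ubuntu:18.04",
	}
	script := "apt-get update; apt-get install -y libvirt0; " +
		"dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
	for _, img := range images {
		// Mount the build output where the script expects the .deb.
		cmd := exec.Command("docker", "run", "--rm",
			"-v", "/path/to/minikube/out:/var/tmp", img, "sh", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: install failed: %v\n", img, err)
			continue
		}
		fmt.Printf("%s: ok\n", img)
	}
}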

TestInsufficientStorage (13.22s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210816221129-6487 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210816221129-6487 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.397060522s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210816221129-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ee4bb931-7775-4d64-9938-041152a7c3a9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"},"datacontenttype":"application/json","id":"c2a9074d-5d0f-480a-a14a-121b49d32c22","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"cc084bb3-76ae-4e7f-b221-296f25d98707","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube"},"datacontenttype":"application/json","id":"07d36299-faa4-4e65-ab17-e8c3ced2bfc2","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"d0dee2e9-6c16-40ba-af2e-e307a5a167de","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"56e685d1-fe3a-4e5b-b4c6-61197e519753","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"839f2056-034a-44e9-ba64-061ca18224f7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"8848dd88-407c-4928-9151-ba47382074db","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"00fc29d4-c54f-471b-ab4a-32dcfc0c3aaa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210816221129-6487 in cluster insufficient-storage-20210816221129-6487","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"1bc461ee-cd3f-4453-ac9c-3ee96202fc89","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"dcda0a78-d6c8-4991-853c-ad29a6ecd962","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"90998db0-2423-4848-8867-9207d6160feb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"f5256b6d-35ae-42a2-92ca-0c185cb7aa30","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210816221129-6487 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210816221129-6487 --output=json --layout=cluster: exit status 7 (274.60122ms)

-- stdout --
	{"Name":"insufficient-storage-20210816221129-6487","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210816221129-6487","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 22:11:36.300394  146449 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210816221129-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210816221129-6487 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210816221129-6487 --output=json --layout=cluster: exit status 7 (274.889107ms)

-- stdout --
	{"Name":"insufficient-storage-20210816221129-6487","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210816221129-6487","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 22:11:36.576387  146507 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210816221129-6487" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	E0816 22:11:36.586854  146507 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/insufficient-storage-20210816221129-6487/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210816221129-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210816221129-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210816221129-6487: (6.276539267s)
--- PASS: TestInsufficientStorage (13.22s)
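The --layout=cluster output above is a single JSON document whose StatusCode fields reuse HTTP-style codes (507 InsufficientStorage for the cluster, 405 Stopped for the components, 500 Error for the kubeconfig). A minimal Go sketch that decodes just the top-level fields shown and flags the storage condition:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterStatus picks out only the top-level fields visible in the output
// above; fields not listed here (Nodes, Components, ...) are simply ignored.
type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	if st.StatusCode == 507 {
		fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
	}
}

Fed the first status document above, this prints the InsufficientStorage line with the "/var is almost out of disk space" detail.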

TestKubernetesUpgrade (124.49s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.370709401s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210816221144-6487
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210816221144-6487: (2.13514819s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210816221144-6487 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210816221144-6487 status --format={{.Host}}: exit status 7 (127.908072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 22:13:20.672037    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.114501176s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210816221144-6487 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio: exit status 106 (150.106494ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210816221144-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210816221144-6487
	    minikube start -p kubernetes-upgrade-20210816221144-6487 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210816221144-64872 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210816221144-6487 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816221144-6487 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.293768933s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210816221144-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210816221144-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210816221144-6487: (3.148858744s)
--- PASS: TestKubernetesUpgrade (124.49s)
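
The upgrade path exercised here is start → stop → start with a newer --kubernetes-version, after which a downgrade attempt exits with status 106. A minimal sketch of that sequence, assuming a minikube binary on PATH; the profile name is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to minikube and streams its output, mirroring the
// (dbg) Run steps in the log above.
func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "k8s-upgrade-demo" // hypothetical profile name
	steps := [][]string{
		{"start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.14.0", "--driver=docker", "--container-runtime=crio"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.22.0-rc.0", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Fprintln(os.Stderr, "step failed:", err)
			os.Exit(1)
		}
	}
	// A subsequent start with --kubernetes-version=v1.14.0 is expected to
	// fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as above.
}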

                                                
                                    
TestMissingContainerUpgrade (190.23s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.122535413.exe start -p missing-upgrade-20210816221142-6487 --memory=2200 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.122535413.exe start -p missing-upgrade-20210816221142-6487 --memory=2200 --driver=docker  --container-runtime=crio: (1m50.243730893s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210816221142-6487
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210816221142-6487: (10.469375226s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210816221142-6487
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210816221142-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210816221142-6487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.164835772s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210816221142-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210816221142-6487
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210816221142-6487: (2.556095684s)
--- PASS: TestMissingContainerUpgrade (190.23s)

                                                
                                    
TestPause/serial/Start (103.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210816221349-6487 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210816221349-6487 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m43.016330071s)
--- PASS: TestPause/serial/Start (103.02s)

                                                
                                    
TestNetworkPlugins/group/false (0.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210816221528-6487 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210816221528-6487 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (247.396187ms)

                                                
                                                
-- stdout --
	* [false-20210816221528-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 22:15:28.126113  199224 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:15:28.126182  199224 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:28.126185  199224 out.go:311] Setting ErrFile to fd 2...
	I0816 22:15:28.126188  199224 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:15:28.126287  199224 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:15:28.126540  199224 out.go:305] Setting JSON to false
	I0816 22:15:28.162135  199224 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-13","uptime":3295,"bootTime":1629148833,"procs":278,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:15:28.162238  199224 start.go:121] virtualization: kvm guest
	I0816 22:15:28.164977  199224 out.go:177] * [false-20210816221528-6487] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:15:28.165113  199224 notify.go:169] Checking for updates...
	I0816 22:15:28.166584  199224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:15:28.168014  199224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:15:28.169427  199224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:15:28.170951  199224 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:15:28.171406  199224 config.go:177] Loaded profile config "cert-options-20210816221525-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:28.171488  199224 config.go:177] Loaded profile config "pause-20210816221349-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0816 22:15:28.171553  199224 config.go:177] Loaded profile config "running-upgrade-20210816221326-6487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0816 22:15:28.171592  199224 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:15:28.224304  199224 docker.go:132] docker version: linux-19.03.15
	I0816 22:15:28.224417  199224 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 22:15:28.314632  199224 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:56 SystemTime:2021-08-16 22:15:28.262002035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0816 22:15:28.314726  199224 docker.go:244] overlay module found
	I0816 22:15:28.317213  199224 out.go:177] * Using the docker driver based on user configuration
	I0816 22:15:28.317236  199224 start.go:278] selected driver: docker
	I0816 22:15:28.317242  199224 start.go:751] validating driver "docker" against <nil>
	I0816 22:15:28.317260  199224 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0816 22:15:28.317323  199224 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0816 22:15:28.317369  199224 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0816 22:15:28.318961  199224 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0816 22:15:28.321174  199224 out.go:177] 
	W0816 22:15:28.321299  199224 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 22:15:28.322833  199224 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210816221528-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210816221528-6487
--- PASS: TestNetworkPlugins/group/false (0.62s)
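
This failure is the expected guard: with --container-runtime=crio, minikube refuses --cni=false (MK_USAGE, exit status 14) because CRI-O needs a CNI plugin. A minimal sketch of a start invocation that satisfies the requirement, assuming a minikube binary on PATH; the profile name is hypothetical:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Any concrete CNI (bridge, calico, flannel, ...) satisfies the
	// crio requirement that --cni=false violates above.
	cmd := exec.Command("minikube", "start",
		"-p", "crio-cni-demo", // hypothetical profile name
		"--memory=2048",
		"--cni=bridge",
		"--driver=docker",
		"--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}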

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (104.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210816221528-6487 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210816221528-6487 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (1m44.287418986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210816221349-6487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210816221349-6487 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.633301981s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (115.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210816221555-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210816221555-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m55.848218163s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (115.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210816221528-6487 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b506490c-fedf-11eb-bd03-0242afc9b4e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [b506490c-fedf-11eb-bd03-0242afc9b4e0] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.529620521s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210816221528-6487 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210816221528-6487 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210816221528-6487 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210816221528-6487 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210816221528-6487 --alsologtostderr -v=3: (20.739393434s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487: exit status 7 (90.430638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210816221528-6487 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (676.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210816221528-6487 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210816221528-6487 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (11m15.604656958s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816221528-6487 -n old-k8s-version-20210816221528-6487
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (676.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ff0e86da-0c8f-4c7c-abee-3efac4dbda3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [ff0e86da-0c8f-4c7c-abee-3efac4dbda3c] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.012147848s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210816221555-6487 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210816221555-6487 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210816221555-6487 --alsologtostderr -v=3: (20.791475815s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487: exit status 7 (91.459182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210816221555-6487 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (351.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210816221555-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0816 22:18:20.659753    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210816221555-6487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (5m51.317203376s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816221555-6487 -n no-preload-20210816221555-6487
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (351.67s)

                                                
                                    
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210816221349-6487 --alsologtostderr -v=5
E0816 22:19:11.452035    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (92.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210816221913-6487 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210816221913-6487 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m32.128640929s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.13s)

                                                
                                    
TestPause/serial/DeletePaused (3.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210816221349-6487 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210816221349-6487 --alsologtostderr -v=5: (3.637046417s)
--- PASS: TestPause/serial/DeletePaused (3.64s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.61s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.527596844s)
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210816221349-6487
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210816221349-6487: exit status 1 (36.927801ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210816221349-6487

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (2.61s)
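
The verification step relies on `docker volume inspect` failing once `minikube delete` has removed the profile volume, exactly as in the stderr above. A minimal sketch of that check, assuming a docker CLI on PATH; the volume name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether docker no longer knows the named volume.
// docker exits non-zero and prints "No such volume" once it is deleted.
func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(string(out), "No such volume")
}

func main() {
	fmt.Println(volumeGone("pause-demo")) // hypothetical volume name
}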

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (52.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210816221939-6487 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210816221939-6487 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (52.880189699s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (52.88s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210816221939-6487 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [8bbfc8bf-afbc-41e1-92df-54d7a582bf27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [8bbfc8bf-afbc-41e1-92df-54d7a582bf27] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011047878s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210816221939-6487 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.54s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210816221939-6487 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210816221939-6487 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (20.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210816221939-6487 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210816221939-6487 --alsologtostderr -v=3: (20.751296301s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210816221913-6487 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [4d92ee86-d65c-41e0-a788-62c5935c3ac3] Pending
helpers_test.go:343: "busybox" [4d92ee86-d65c-41e0-a788-62c5935c3ac3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [4d92ee86-d65c-41e0-a788-62c5935c3ac3] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.010511111s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210816221913-6487 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210816221913-6487 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210816221913-6487 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210816221913-6487 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210816221913-6487 --alsologtostderr -v=3: (20.753988923s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.75s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487: exit status 7 (90.747771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210816221939-6487 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (344.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210816221939-6487 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210816221939-6487 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (5m43.839760166s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816221939-6487 -n default-k8s-different-port-20210816221939-6487
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (344.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487: exit status 7 (91.065697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210816221913-6487 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (380.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210816221913-6487 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
E0816 22:23:20.659881    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 22:24:11.464034    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214127-6487/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210816221913-6487 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (6m20.413296127s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816221913-6487 -n embed-certs-20210816221913-6487
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (380.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-v5svh" [fe73698d-9e64-4203-a610-46f770420f20] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-v5svh" [fe73698d-9e64-4203-a610-46f770420f20] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.018223117s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-v5svh" [fe73698d-9e64-4203-a610-46f770420f20] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007059842s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210816221555-6487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210816221555-6487 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
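
The image audit above parses `sudo crictl images -o json` and reports tags outside the expected registries. A minimal sketch of such a scan, assuming crictl's JSON payload carries an "images" list with "repoTags" (the CRI ListImagesResponse shape); the allow-list here is illustrative, not the test's exact rule:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList matches just the fields this sketch needs from crictl output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"]},{"repoTags":["k8s.gcr.io/pause:3.5"]}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Anything outside k8s.gcr.io/gcr.io is flagged, mirroring the
			// "Found non-minikube image" lines above.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") && !strings.HasPrefix(tag, "gcr.io/") {
				fmt.Println("non-minikube image:", tag)
			}
		}
	}
}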

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210816222436-6487 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210816222436-6487 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (48.809317154s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210816222436-6487 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (20.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210816222436-6487 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210816222436-6487 --alsologtostderr -v=3: (20.773767067s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487: exit status 7 (90.93171ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210816222436-6487 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210816222436-6487 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210816222436-6487 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (24.964883875s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816222436-6487 -n newest-cni-20210816222436-6487
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210816222436-6487 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-jmcw9" [0a7f8825-8975-4257-a108-92592ad8f017] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014446607s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-jmcw9" [0a7f8825-8975-4257-a108-92592ad8f017] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006057627s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210816221939-6487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210816221939-6487 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-587mw" [86ad5b58-9bf6-4eb2-893e-ef10cae1786f] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01055077s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-587mw" [86ad5b58-9bf6-4eb2-893e-ef10cae1786f] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006474553s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210816221913-6487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210816221913-6487 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestNetworkPlugins/group/auto/Start (98.83s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio
E0816 22:28:12.310965    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio: (1m38.826877624s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.83s)
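auto is the one network-plugins variant that passes no --cni flag, leaving the choice to minikube (crio requires some CNI; compare the kubenet skip near the end of this report). A sketch of inspecting what was actually installed, assuming the conventional CNI config directory inside the node (not something this test asserts):

    out/minikube-linux-amd64 ssh -p auto-20210816221527-6487 "ls /etc/cni/net.d"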

TestNetworkPlugins/group/kindnet/Start (97.64s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio
E0816 22:28:20.660043    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816215050-6487/client.crt: no such file or directory
E0816 22:28:32.792142    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816221555-6487/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio: (1m37.638272817s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (97.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-kvm5k" [95dc904d-fee0-11eb-938e-0242c0a83a02] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01265309s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestNetworkPlugins/group/enable-default-cni/Start (53.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio: (53.187502718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.19s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-kvm5k" [95dc904d-fee0-11eb-938e-0242c0a83a02] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005347277s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210816221528-6487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210816221528-6487 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210816221527-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210816221527-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-vnf2f" [5630d8e0-eafb-45f0-bc83-0e982a3ffde1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-vnf2f" [5630d8e0-eafb-45f0-bc83-0e982a3ffde1] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006049703s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.40s)
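NetCatPod force-replaces the netcat deployment from testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat is Running and Ready. A roughly equivalent manual run, using kubectl wait in place of the test's own polling helper:

    kubectl --context auto-20210816221527-6487 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-20210816221527-6487 wait --for=condition=Ready pod -l app=netcat --timeout=15m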

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-c7m6z" [4f09a859-acb9-4b08-82db-31c22870e25d] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014136718s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
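ControllerPod only asserts that the plugin's own daemon pod reports Ready in kube-system; the label and namespace are the ones the test polls above. A hand-run equivalent, again substituting kubectl wait for the polling helper:

    kubectl --context kindnet-20210816221528-6487 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m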

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210816221527-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.48s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210816221527-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-28cq8" [a6785707-76fd-459b-a7f3-9f72bdaf55cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-28cq8" [a6785707-76fd-459b-a7f3-9f72bdaf55cf] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005808079s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.48s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210816221528-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210816221528-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-gbws5" [4547ffa5-12a6-4ec3-a693-f61b5ccb4c77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-gbws5" [4547ffa5-12a6-4ec3-a693-f61b5ccb4c77] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006590519s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210816221527-6487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
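DNS, Localhost and HairPin form the connectivity trio run against every plugin: an in-cluster lookup of kubernetes.default, a netcat probe of the pod's own localhost:8080, and a hairpin probe of the pod's own service name. The commands can be replayed verbatim from the log, here against the auto profile:

    kubectl --context auto-20210816221527-6487 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"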

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210816221527-6487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/bridge/Start (61.98s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210816221527-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio: (1m1.982270856s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.98s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/cilium/Start (83.37s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio: (1m23.371559463s)
--- PASS: TestNetworkPlugins/group/cilium/Start (83.37s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210816221528-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210816221528-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-weave/Start (70.53s)
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio
E0816 22:30:32.482043    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.487315    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.497529    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.517790    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.558012    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.638417    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:32.889342    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:33.209856    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:33.850449    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio: (1m10.534248s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (70.53s)
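custom-weave exercises the path where --cni is given a manifest path instead of a keyword, so any CNI YAML can be injected. The same mechanism works outside the test; a sketch with a hypothetical profile name (all flags taken from the run above):

    out/minikube-linux-amd64 start -p custom-cni-demo --memory=2048 --cni=testdata/weavenet.yaml --driver=docker --container-runtime=crio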

TestNetworkPlugins/group/calico/Start (74.46s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio
E0816 22:30:42.812292    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
E0816 22:30:53.052955    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210816221528-6487 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio: (1m14.458544075s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.46s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210816221527-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (9.52s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210816221527-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-stslx" [63da7390-652a-47ed-9684-59b8ab4b78d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-stslx" [63da7390-652a-47ed-9684-59b8ab4b78d6] Running
E0816 22:31:13.534139    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.204427024s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.52s)

TestNetworkPlugins/group/bridge/DNS (0.49s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210816221527-6487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.49s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210816221527-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210816221528-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-weave/NetCatPod (10.52s)
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210816221528-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-tp7ns" [3c749ea3-bc6d-470e-9df3-7be21d45497d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-tp7ns" [3c749ea3-bc6d-470e-9df3-7be21d45497d] Running

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.009350655s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.52s)

TestNetworkPlugins/group/cilium/ControllerPod (5.96s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-6d7xl" [1da9de2b-92dd-4715-a458-d229f5057837] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.960073699s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.96s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210816221528-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210816221528-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-5dxzz" [85cdafa4-4387-424c-8f79-f15eef9f9b10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-5dxzz" [85cdafa4-4387-424c-8f79-f15eef9f9b10] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.05373592s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.40s)

TestNetworkPlugins/group/cilium/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210816221528-6487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210816221528-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

TestNetworkPlugins/group/cilium/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210816221528-6487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-x8f2n" [e2daeb98-bbf2-4c29-983f-08c16f8302cf] Running
E0816 22:31:54.495267    6487 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-3349-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816221939-6487/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.012929689s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210816221528-6487 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210816221528-6487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-htvnr" [1d7c8da8-28cd-4ab3-a975-3f764151d0cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-htvnr" [1d7c8da8-28cd-4ab3-a975-3f764151d0cf] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006234471s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)


Test skip (24/262)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.54s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210816221938-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210816221938-6487
--- SKIP: TestStartStop/group/disable-driver-mounts (0.54s)

TestNetworkPlugins/group/kubenet (0.33s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210816221527-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210816221527-6487
--- SKIP: TestNetworkPlugins/group/kubenet (0.33s)

TestNetworkPlugins/group/flannel (0.37s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210816221527-6487" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210816221527-6487
--- SKIP: TestNetworkPlugins/group/flannel (0.37s)
